Red Hat Bugzilla – Bug 1108593
libvirtd crashes when saving a guest whose XML contains a DAC seclabel with type='none'
Last modified: 2015-03-05 02:37:33 EST
+++ This bug was initially created as a clone of Bug #1108590 +++

Description of problem:
libvirtd crashes when saving a guest whose XML contains a DAC seclabel with type='none'.

Version-Release number of selected component (if applicable):
kernel-2.6.32-466.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.426.el6.x86_64
libvirt-0.10.2-38.el6.x86_64

How reproducible:

Steps to Reproduce:
1. Prepare a guest with the following content in the guest's XML:
# virsh dumpxml rhel6
--
<seclabel type='none' model='dac'/>

2. Start the guest:
# virsh start rhel6

3. Save the guest; libvirtd crashes:
# virsh save rhel6 /tmp/rh6.save
error: Failed to save domain rhel6 to /tmp/rh6.save
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor

Actual results:
libvirtd crashed

Expected results:
The guest should be saved successfully and libvirtd should not crash

Additional info:

--- Additional comment from zhenfeng wang on 2014-06-12 05:49:18 EDT ---

(gdb) t a a bt

Thread 11 (Thread 0x7fffec0f1700 (LWP 15423)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7fffecaf2700 (LWP 15422)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7fffed4f3700 (LWP 15421)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7fffedef4700 (LWP 15420)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7fffee8f5700 (LWP 15419)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7fffef2f6700 (LWP 15418)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7fffefcf7700 (LWP 15417)):
#0  0x0000003d14681451 in __strlen_sse2 () from /lib64/libc.so.6
#1  0x0000003d14681166 in strdup () from /lib64/libc.so.6
#2  0x00007ffff7a1036f in virParseOwnershipIds (label=0x0, uidPtr=0x7fffefcf679c, gidPtr=0x7fffefcf6798) at util/util.c:3446
#3  0x000000000044d63e in qemuOpenFile (driver=<value optimized out>, vm=<value optimized out>, path=0x7fffdc000cc0 "/tmp/rh6.save", oflags=577, needUnlink=0x7fffefcf682e, bypassSecurityDriver=0x7fffefcf682f) at qemu/qemu_driver.c:2751
#4  0x000000000046dc0c in qemuDomainSaveMemory (driver=0x7fffe400b860, vm=0x7fffe40d1470, path=0x7fffdc000cc0 "/tmp/rh6.save", domXML=<value optimized out>, compressed=0, was_running=<value optimized out>, flags=0, asyncJob=QEMU_ASYNC_JOB_SAVE) at qemu/qemu_driver.c:2941
#5  0x000000000046e39f in qemuDomainSaveInternal (driver=0x7fffe400b860, dom=0x7fffdc000c50, vm=0x7fffe40d1470, path=0x7fffdc000cc0 "/tmp/rh6.save", compressed=0, xmlin=0x0, flags=0) at qemu/qemu_driver.c:3087
#6  0x000000000046e93e in qemuDomainSaveFlags (dom=0x7fffdc000c50, path=0x7fffdc000cc0 "/tmp/rh6.save", dxml=0x0, flags=0) at qemu/qemu_driver.c:3196
#7  0x00007ffff7ab4135 in virDomainSave (domain=0x7fffdc000c50, to=0x7fffdc000d30 "/tmp/rh6.save") at libvirt.c:2590
#8  0x000000000043cad6 in remoteDispatchDomainSave (server=<value optimized out>, client=<value optimized out>, msg=<value optimized out>, rerr=0x7fffefcf6b80, args=<value optimized out>, ret=<value optimized out>) at remote_dispatch.h:4630
#9  remoteDispatchDomainSaveHelper (server=<value optimized out>, client=<value optimized out>, msg=<value optimized out>, rerr=0x7fffefcf6b80, args=<value optimized out>, ret=<value optimized out>) at remote_dispatch.h:4608
#10 0x00007ffff7aec2c2 in virNetServerProgramDispatchCall (prog=0x79fa70, server=0x796ff0, client=0x79ac00, msg=0x79b140) at rpc/virnetserverprogram.c:431
#11 virNetServerProgramDispatch (prog=0x79fa70, server=0x796ff0, client=0x79ac00, msg=0x79b140) at rpc/virnetserverprogram.c:304
#12 0x00007ffff7aeab0e in virNetServerProcessMsg (srv=<value optimized out>, client=0x79ac00, prog=<value optimized out>, msg=0x79b140) at rpc/virnetserver.c:170
#13 0x00007ffff7aeb1ac in virNetServerHandleJob (jobOpaque=<value optimized out>, opaque=<value optimized out>) at rpc/virnetserver.c:191
#14 0x00007ffff7a0d2bc in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:144
#15 0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#16 0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#17 0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7ffff06f8700 (LWP 15416)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7ffff10f9700 (LWP 15415)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7ffff1afa700 (LWP 15414)):
#0  0x0000003d14a0b5bc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff7a0cd86 in virCondWait (c=<value optimized out>, m=<value optimized out>) at util/threads-pthread.c:117
#2  0x00007ffff7a0d353 in virThreadPoolWorker (opaque=<value optimized out>) at util/threadpool.c:103
#3  0x00007ffff7a0cba9 in virThreadHelper (data=<value optimized out>) at util/threads-pthread.c:161
#4  0x0000003d14a079d1 in start_thread () from /lib64/libpthread.so.0
#5  0x0000003d146e8b6d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7ffff798c860 (LWP 15412)):
#0  0x0000003d146df343 in poll () from /lib64/libc.so.6
#1  0x00007ffff79fa72c in virEventPollRunOnce () at util/event_poll.c:615
#2  0x00007ffff79f9967 in virEventRunDefaultImpl () at util/event.c:247
#3  0x00007ffff7aea34d in virNetServerRun (srv=0x796ff0) at rpc/virnetserver.c:748
#4  0x0000000000423eb7 in main (argc=<value optimized out>, argv=<value optimized out>) at libvirtd.c:1229
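The crash is in Thread 5: virParseOwnershipIds() is entered with label=0x0 and dies inside strdup()/__strlen_sse2(). As a small stand-alone illustration (this is not libvirt source, only the NULL value taken from the backtrace above), passing NULL to strdup() is undefined behaviour and segfaults in glibc's strlen:

/* Illustration only (not libvirt code): strdup(NULL), as in frame #2
 * above where label=0x0, crashes inside glibc's strlen()/__strlen_sse2(). */
#include <string.h>

int main(void)
{
    const char *label = NULL;    /* mirrors label=0x0 in the backtrace */
    char *copy = strdup(label);  /* segfaults here */
    (void)copy;
    return 0;
}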
Fixed upstream by:

commit 7eb0ee175b278a4439cee65a7a554767f0be9cd1
Author:     Ján Tomko <jtomko@redhat.com>
AuthorDate: 2014-06-12 10:50:43 +0200
Commit:     Ján Tomko <jtomko@redhat.com>
CommitDate: 2014-06-12 12:01:35 +0200

    Fix crash when saving a domain with type none dac label

    qemuDomainGetImageIds did not check if there was a label in the
    seclabel, thus crashing on <seclabel type='none' model='dac'/>

    https://bugzilla.redhat.com/show_bug.cgi?id=1108590

git describe: v1.2.5-112-g7eb0ee1
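Below is a minimal sketch of the kind of guard the commit message describes: only parse "uid:gid" ownership when the DAC seclabel actually carries a label string, so <seclabel type='none' model='dac'/> keeps the default IDs instead of dereferencing NULL. This is not the verbatim upstream patch; struct sec_label, parse_ownership_ids() and get_image_ids() are simplified, hypothetical stand-ins for libvirt's real seclabel type, virParseOwnershipIds() and qemuDomainGetImageIds().

/* Hedged sketch, not the upstream patch: guard against a DAC seclabel
 * whose label string is absent (as with type='none'). All names here are
 * simplified stand-ins for the real libvirt functions. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

struct sec_label {
    const char *model;  /* e.g. "dac" */
    const char *label;  /* e.g. "107:107"; NULL when type='none' */
};

/* Stand-in for virParseOwnershipIds(): parse "uid:gid" into numeric ids. */
static int parse_ownership_ids(const char *label, uid_t *uid, gid_t *gid)
{
    char *end = NULL;
    unsigned long u, g;

    if (!label)
        return -1;

    u = strtoul(label, &end, 10);
    if (!end || *end != ':')
        return -1;
    g = strtoul(end + 1, NULL, 10);

    *uid = (uid_t)u;
    *gid = (gid_t)g;
    return 0;
}

/* Stand-in for the fixed qemuDomainGetImageIds(): only consult the DAC
 * seclabel when it actually has a label; otherwise keep the defaults. */
static void get_image_ids(const struct sec_label *seclabel,
                          uid_t *uid, gid_t *gid)
{
    if (seclabel && seclabel->label)   /* the check the fix adds */
        (void)parse_ownership_ids(seclabel->label, uid, gid);
}

int main(void)
{
    struct sec_label none_dac = { "dac", NULL };  /* type='none' case */
    uid_t uid = 0;
    gid_t gid = 0;

    get_image_ids(&none_dac, &uid, &gid);  /* no crash, ids stay at defaults */
    printf("uid=%lu gid=%lu\n", (unsigned long)uid, (unsigned long)gid);
    return 0;
}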
I could reproduce it with libvirt-1.1.1-29.el7.x86_64 using the following steps:

1. Prepare a guest with the following content in the guest's XML:
# virsh dumpxml rhel6
--
<seclabel type='none' model='dac'/>

2. Start the guest:
# virsh start rhel6

3. Save the guest; libvirtd crashes:
# virsh save rhel6 /tmp/rh6.save
error: Failed to save domain rhel6 to /tmp/rh6.save
error: End of file while reading data: Input/output error
error: One or more references were leaked after disconnect from the hypervisor
error: Failed to reconnect to the hypervisor

Actual results:
libvirtd crashed
Verified this issue with libvirt-1.2.7-1.el7.x86_64:

1. Prepare a guest with the following content in the guest's XML:
# virsh dumpxml rhel6
--
<seclabel type='none' model='dac'/>

2. Start the guest:
# virsh start rhel6

3. Save the guest:
# virsh save rhel6 /tmp/rh6.save
Domain rhel6 saved to /tmp/rh6.save
Hi Jan Tomko,

While doing regression testing for this bug on RHEL 7.1, I found that the guest fails to start with the following steps. Could you please help check it? Thanks in advance.

Version-Release number of selected component (if applicable):
kernel-3.10.0-188.el7.x86_64
qemu-img-rhev-2.1.2-3.el7.x86_64
libvirt-1.2.8-5.el7.x86_64

How reproducible:

Steps to Reproduce:
1. Set the SELinux driver in qemu.conf:
security_driver = "selinux"

2. Prepare a guest with the following content in the guest XML:
# virsh dumpxml rhel6
--
<seclabel type='none' model='dac'/>

3. Check the image for this guest:
# ll -Z
-rw-------. root root system_u:object_r:virt_image_t:s0 rhel6.img

4. Starting the guest fails with this error:
# virsh start rhel6
error: Failed to start domain rhel6
error: internal error: process exited while connecting to monitor: 2014-10-17T06:02:33.600551Z qemu-kvm: -drive file=/var/lib/libvirt/images/rhel6.img,if=none,id=drive-ide0-0-0,format=qcow2,cache=none: could not open disk image /var/lib/libvirt/images/rhel6.img: Could not open '/var/lib/libvirt/images/rhel6.img':

5. Check the image again; it is still owned by root/root:
# ll -Z
-rw-------. root root system_u:object_r:virt_image_t:s0 rhel6.img

6. A guest set up with the same steps starts successfully on RHEL 6.6 with libvirt-0.10.2-46.el6.x86_64; there the image's owner and group are changed to qemu/qemu automatically.

I am wondering whether this is a new issue for the DAC model under the SELinux driver on RHEL 7.1?

vivian zhang
I don't think libvirt should change the uid and gid with type='none' model='dac' seclabel. So this is an issue of RHEL-6.6 libvirt, but not really worth fixing in my opinion. This was fixed by honoring the 'relabel' attribute for model='dac' labels for bug https://bugzilla.redhat.com/show_bug.cgi?id=999301
(In reply to Jan Tomko from comment #7)
> I don't think libvirt should change the uid and gid with type='none'
> model='dac' seclabel. So this is an issue of RHEL-6.6 libvirt, but not
> really worth fixing in my opinion.
>
> This was fixed by honoring the 'relabel' attribute for model='dac' labels
> for bug https://bugzilla.redhat.com/show_bug.cgi?id=999301

Hi Jan,

Thanks so much for your reply. I fully agree that a type='none' model='dac' seclabel should not change the uid and gid on RHEL 7.1. However, if a customer sets <seclabel type='none' model='dac'/> for a guest, on RHEL 7.1 the guest fails to start, which is different from the RHEL 6.6 behaviour, and I am wondering whether that will confuse customers. So we hope to file a bug on RHEL 6.7 to fix this issue. Hope for your reply.

vivian zhang
Hi Jan,

I can reproduce this bug on build libvirt-1.1.1-29.el7.x86_64 and retested it on builds libvirt-1.2.8-10.el7.x86_64 and qemu-img-rhev-2.1.2-15.el7.x86_64:

1. Set the SELinux driver in qemu.conf:
security_driver = "selinux"

2. Prepare a guest with the following content in the guest XML:
# virsh dumpxml vm1
--
<seclabel type='none' model='dac'/>

3. Check the image for this guest:
# ll -Z /var/lib/libvirt/images/rhel65.img
-rw-------. root root system_u:object_r:virt_image_t:s0 /var/lib/libvirt/images/rhel65.img

4. Starting the guest fails, which is an expected result on RHEL 7.1:
# virsh start vm1
error: Failed to start domain vm1
error: internal error: process exited while connecting to monitor: 2014-12-09T03:09:36.416068Z qemu-kvm: -drive file=/var/lib/libvirt/images/rhel65.img,if=none,id=drive-ide0-0-0,format=raw: could not open disk image /var/lib/libvirt/images/rhel65.img: Could not open '/var/lib/libvirt/images/rhel65.img': Permission denied

5. Change the image ownership to qemu:qemu manually:
# chown qemu:qemu /var/lib/libvirt/images/rhel65.img
# ll -Z /var/lib/libvirt/images/rhel65.img
-rw-------. qemu qemu system_u:object_r:virt_image_t:s0 /var/lib/libvirt/images/rhel65.img

6. Check the libvirtd process id:
# ps aux |grep libvirtd
root 23919 0.0 0.1 1057124 18756 ? Ssl 10:46 0:00 /usr/sbin/libvirtd --listen
root 24428 0.0 0.0 112644 960 pts/1 R+ 11:24 0:00 grep --color=auto libvirtd

7. Start the guest again; it succeeds:
# virsh start vm1
Domain vm1 started

8. Save the guest to a file:
# virsh save vm1 /tmp/vm1.save
Domain vm1 saved to /tmp/vm1.save

9. Recheck the libvirtd process; it no longer crashes:
# ps aux |grep libvirtd
root 23919 0.0 0.1 1122660 18848 ? Ssl 10:46 0:00 /usr/sbin/libvirtd --listen
root 24471 0.0 0.0 112644 956 pts/1 S+ 11:25 0:00 grep --color=auto libvirtd

So, according to comment 7, do you think the above steps are valid to verify this bug on RHEL 7.1, and are they enough to move this bug to VERIFIED? This is still a known issue for RHEL 6.6; how should we follow it up? We would still like to file a bug about this on RHEL 6.7. What is your opinion?

vivian zhang
Yes, the steps are enough to verify it on RHEL7. As said in comment 7, the behavior in RHEL6 is not worth fixing.
Since this works correctly on RHEL 7, moving the bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-0323.html