Bug 996814
| Summary: | boot image with gluster native mode can't work when attaching another device from the local file system |
|---|---|
| Product: | Red Hat Enterprise Linux 6 |
| Component: | qemu-kvm |
| Version: | 6.5 |
| Status: | CLOSED ERRATA |
| Severity: | high |
| Priority: | high |
| Reporter: | mazhang <mazhang> |
| Assignee: | Stefan Hajnoczi <stefanha> |
| QA Contact: | Virtualization Bugs <virt-bugs> |
| CC: | acathrow, asias, bsarathy, chayang, flang, juzhang, mazhang, michen, mkenneth, qzhang, tlavigne, vbellur, virt-maint |
| Target Milestone: | rc |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Type: | Bug |
| Doc Type: | Bug Fix |
| Fixed In Version: | qemu-kvm-0.12.1.2-2.412.el6 |
| Last Closed: | 2013-11-21 07:10:35 UTC |
| Bug Depends On: | 998832 (view as bug list) |
| Bug Blocks: | 1000882 |
Description
mazhang, 2013-08-14 03:17:55 UTC
RHEL7 can't hit this problem with the same command line and glusterfs server. The version of qemu-kvm on RHEL7:

```
qemu-kvm-common-1.5.2-3.el7.x86_64
ipxe-roms-qemu-20130517-1.gitc4bce43.el7.noarch
qemu-img-1.5.2-3.el7.x86_64
qemu-kvm-tools-1.5.2-3.el7.x86_64
qemu-kvm-1.5.2-3.el7.x86_64
```

Update:

1) This works fine. No SIGUSR2 is observed.

```
gdb --args $QEMU -L /usr/share/qemu-kvm -nographic -vnc :11 -enable-kvm -m 1024 \
    -netdev tap,id=hn0,vhost=on -device virtio-net-pci,netdev=hn0 \
    -drive file=$IMG,if=none,id=os -device virtio-blk-pci,drive=os,bootindex=1,scsi=off \
    -drive file=gluster://gluster-server/vol/rhel6u5_2.qcow2,if=none,id=gfs0,cache=none,aio=native -device virtio-blk-pci,drive=gfs0,bootindex=0
```

2) If I switch the order of the two disks, SIGUSR2 is observed sometimes.

```
gdb --args $QEMU -L /usr/share/qemu-kvm -nographic -vnc :11 -enable-kvm -m 1024 \
    -netdev tap,id=hn0,vhost=on -device virtio-net-pci,netdev=hn0 \
    -drive file=gluster://gluster-server/vol/rhel6u5_2.qcow2,if=none,id=gfs0,cache=none,aio=native -device virtio-blk-pci,drive=gfs0,bootindex=0 \
    -drive file=$IMG,if=none,id=os -device virtio-blk-pci,drive=os,bootindex=1,scsi=off
```

```
Program received signal SIGUSR2, User defined signal 2.
```
> Program received signal SIGUSR2, User defined signal 2.

In scenario 2, after powering off the guest, the problem is sometimes hit as well:
```
Unmounting pipe file systems:  [ OK ]
Unmounting file systems:  [ OK ]
init: Re-executing /sbin/init
Halting system...
ACPI: Preparing to enter system sleep state S5
Disabling non-boot CPUs ...
Power down.
[New Thread 0x7fff751f4700 (LWP 3824)]

Program received signal SIGUSR2, User defined signal 2.
[Switching to Thread 0x7fffeebf6700 (LWP 3799)]
0x00007ffff772798e in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
(gdb) bt
#0  0x00007ffff772798e in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff4e5cb8f in syncenv_task () from /usr/lib64/libglusterfs.so.0
#2  0x00007ffff4e60d10 in syncenv_processor () from /usr/lib64/libglusterfs.so.0
#3  0x00007ffff77239d1 in start_thread () from /lib64/libpthread.so.0
#4  0x00007ffff517ca8d in clone () from /lib64/libc.so.6
```
```
Program received signal SIGUSR2, User defined signal 2.
[Switching to Thread 0x7fffeebf4700 (LWP 7691)]
pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:239
239     62:     movq    %rax, %r14
(gdb) bt
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:239
#1  0x00007ffff4c56b8f in syncenv_task (proc=0x7ffff8704c20) at syncop.c:306
#2  0x00007ffff4c5ad10 in syncenv_processor (thdata=0x7ffff8704c20) at syncop.c:384
#3  0x00007ffff77219d1 in start_thread (arg=0x7fffeebf4700) at pthread_create.c:301
#4  0x00007ffff4f76a8d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
```

Created attachment 788321 [details]
gluster.sigusr2
With the patch in Comment 6 applied on Gluster, the issue is gone. The problem is that the Gluster threads do not block SIGUSR2, which is used by qemu-kvm's posix-aio-compat.c code.

Created attachment 788333 [details]
Block SIGUSR2 only
This also works.
Created attachment 788353 [details]
Block all signals
This works.
QEMU cannot eliminate signals completely. In order to make gluster work with QEMU, gluster threads need to block them; it's necessary for threads to carefully block signals that are not essential to them.

Reproduced this bug with glusterfs-3.4.0.19rhs-2.el6.x86_64 and qemu-kvm-0.12.1.2-2.411.el6.x86_64. After updating the glusterfs and qemu-kvm packages, re-tested 5 times without hitting this problem.

host: RHEL6.5-Snapshot-2.0

```
[root@m2 ~]# rpm -qa | grep qemu
qemu-kvm-0.12.1.2-2.412.el6.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
qemu-kvm-debuginfo-0.12.1.2-2.412.el6.x86_64
qemu-img-0.12.1.2-2.412.el6.x86_64
qemu-kvm-tools-0.12.1.2-2.412.el6.x86_64
[root@m2 ~]# rpm -qa | grep glusterfs
glusterfs-3.4.0.34rhs-1.el6.x86_64
glusterfs-api-3.4.0.34rhs-1.el6.x86_64
glusterfs-libs-3.4.0.34rhs-1.el6.x86_64
```

rhs: RHS-2.1-20130830.n.0

```
[root@rhs brick1]# rpm -qa | grep glusterfs
glusterfs-server-3.4.0.34rhs-1.el6rhs.x86_64
samba-glusterfs-3.6.9-160.3.el6rhs.x86_64
glusterfs-3.4.0.34rhs-1.el6rhs.x86_64
glusterfs-fuse-3.4.0.34rhs-1.el6rhs.x86_64
glusterfs-api-3.4.0.34rhs-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0.34rhs-1.el6rhs.x86_64
glusterfs-libs-3.4.0.34rhs-1.el6rhs.x86_64
glusterfs-rdma-3.4.0.34rhs-1.el6rhs.x86_64
```

Steps and command line: refer to comment#0 and comment#3.

Result: qemu-kvm runs well, no crash, no SIGUSR2, so this bug has been fixed.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-1553.html