Bug 998832

Summary: threads created by gluster should block signals which are not used by gluster itself
Product: Red Hat Enterprise Linux 6
Reporter: Asias He <asias>
Component: glusterfs
Assignee: Anand Avati <aavati>
Status: CLOSED ERRATA
QA Contact: Sachidananda Urs <surs>
Severity: high
Docs Contact:
Priority: high
Version: 6.5
CC: acathrow, amarts, areis, asias, barumuga, bsarathy, chayang, chrisw, flang, grajaiya, juzhang, kparthas, mazhang, mkenneth, pbonzini, qzhang, shaines, tlavigne, vbellur, virt-maint, xigao
Target Milestone: rc   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: glusterfs-3.4.0.34rhs-1.el6
Doc Type: Bug Fix
Doc Text:
Cause: 'glusterfs-api' is a library consumed by applications, so it should not handle all signals itself. Consequence: signals intended for the application could be wrongly interpreted by glusterfs code. Fix: glusterfs-api now blocks the signals it does not handle. Result: the application's signal handling works properly.
Story Points: ---
Clone Of: 996814
Clones: 1010337, 1011662 (view as bug list)
Environment:
Last Closed: 2013-11-21 12:00:41 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 996814, 1010337, 1011662    

Comment 3 Paolo Bonzini 2013-08-27 12:07:31 UTC
Gluster should really block _all_ signals except the per-thread ones (SIGSEGV, SIGBUS, SIGILL, SIGSYS, SIGFPE).
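
A minimal sketch of the mask Comment 3 asks for, using plain POSIX calls; the helper name is hypothetical and not taken from the gluster sources:

#include <signal.h>
#include <pthread.h>

/* Block every signal in the calling thread except the synchronous,
 * per-thread fault signals, which must remain deliverable to the
 * thread that triggers them. Threads created afterwards inherit
 * this mask. */
static int mask_signals_for_gluster_thread(void)
{
    sigset_t set;

    sigfillset(&set);
    sigdelset(&set, SIGSEGV);
    sigdelset(&set, SIGBUS);
    sigdelset(&set, SIGILL);
    sigdelset(&set, SIGSYS);
    sigdelset(&set, SIGFPE);

    return pthread_sigmask(SIG_SETMASK, &set, NULL);
}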

Comment 7 Paolo Bonzini 2013-09-20 14:02:06 UTC
> IIRC, not only the APIs we call creates threads, Gluster itself creates extra threads. We can not mask them all in QEMU side.

Yes, but Gluster can create threads only from Gluster threads that were created before, or from APIs that we call ourselves. So if all QEMU->Gluster entry points block signals, all threads created by Gluster will have the right mask.
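
A sketch of the pattern described here, assuming the calling side wraps a gfapi entry point such as glfs_init(); the wrapper name is illustrative and not an existing QEMU or gluster function:

#include <signal.h>
#include <pthread.h>
#include <glusterfs/api/glfs.h>

/* Block all signals before calling into gfapi so that any worker
 * threads gluster spawns inherit the fully blocked mask, then
 * restore the caller's original mask afterwards. */
static int glfs_init_signals_blocked(glfs_t *fs)
{
    sigset_t all, old;
    int ret;

    sigfillset(&all);
    pthread_sigmask(SIG_BLOCK, &all, &old);
    ret = glfs_init(fs);
    pthread_sigmask(SIG_SETMASK, &old, NULL);

    return ret;
}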

Comment 10 Gowrishankar Rajaiyan 2013-10-09 13:51:11 UTC
[root@dhcp201-162 ~]# /usr/libexec/qemu-kvm -M pc -cpu SandyBridge -m 2G -smp 1,sockets=2,cores=2,threads=1,maxcpus=2 -enable-kvm -name win2012 -drive file=gluster://10.65.201.191/vmstore/win2012r2-free-install.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads -drive file=/home/rhel6u5.qcow2,if=none,id=drive-scsi-disk,format=qcow2,cache=none,werror=stop,rerror=stop
VNC server running on `127.0.0.1:5901'


/usr/libexec/qemu-kvm -M pc -cpu SandyBridge -m 2G -smp 1,sockets=2,cores=2,threads=1,maxcpus=2 -enable-kvm -name win2012 -drive file=gluster://10.65.201.191/vmstore/win2012r2-free-install.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads -drive file=/root/windows.iso,if=none,media=cdrom,id=drive-ide0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0,id=ide0,bootindex=1
VNC server running on `127.0.0.1:5900'


qemu-kvm launches the instance and no crash is detected.

Version: 
* qemu-kvm-0.12.1.2-2.411.el6.x86_64
* glusterfs-3.4.0.34rhs-1.el6.x86_64
* kernel-2.6.32-421.el6.x86_64

Comment 11 errata-xmlrpc 2013-11-21 12:00:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1641.html