Volume file used when this crash was observed:

volume posix1
  type storage/posix
  option directory /home/vortex/export/1
end-volume

volume posix2
  type storage/posix
  option directory /home/vortex/export/2
end-volume

volume posix3
  type storage/posix
  option directory /home/vortex/export/3
end-volume

volume posix4
  type storage/posix
  option directory /home/vortex/export/4
end-volume

volume posix5
  type storage/posix
  option directory /home/vortex/export/5
end-volume

volume posix6
  type storage/posix
  option directory /home/vortex/export/6
end-volume

volume posix7
  type storage/posix
  option directory /home/vortex/export/7
end-volume

volume posix-namespace
  type storage/posix
  option directory /home/vortex/export/namespace
end-volume

volume unify
  type cluster/unify
  option scheduler alu
  option self-heal background
  option scheduler.alu.order disk-usage
  option namespace posix-namespace
  subvolumes posix1 posix2 posix3 posix4 posix5 posix6 posix7
end-volume

volume locks
  type features/locks
  subvolumes unify
end-volume

volume export
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume runsposix
  type storage/posix
  option directory /home/vortex/export/runs
end-volume

volume runs
  type performance/io-threads
  # option autoscaling no
  option thread-count 4
  subvolumes runslocks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.export.allow 127.0.0.1,212.36.79.106,212.36.79.114,192.168.2.*,192.168.1.*
  option auth.addr.runs.allow 127.0.0.1,212.36.79.106,212.36.79.114,192.168.2.*,192.168.1.*
  subvolumes export runs
end-volume
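For context, a client would reach the crashing unify stack by opening files through the exported 'export' volume. A minimal client-side volume file for GlusterFS 2.0.x might look like the sketch below; the hostname 'vortex-server' is a placeholder I am assuming, not a value from the report:

volume client-export
  type protocol/client
  option transport-type tcp
  # hypothetical hostname; substitute the actual server address
  option remote-host vortex-server
  # must match the name of a volume exported by the server above
  option remote-subvolume export
end-volume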
Reported by David Saez Padros on the gluster-users mailing list:

Core was generated by `/usr/sbin/glusterfsd -p /var/run/glusterfsd.pid -f /etc/glusterfs/glusterfsd.vo'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f7b568f30a8 in unify_open () from /usr/lib/glusterfs/2.0.4/xlator/cluster/unify.so
(gdb) backtrace
#0  0x00007f7b568f30a8 in unify_open () from /usr/lib/glusterfs/2.0.4/xlator/cluster/unify.so
#1  0x00007f7b566e5945 in pl_open () from /usr/lib/glusterfs/2.0.4/xlator/features/locks.so
#2  0x00007f7b564db850 in iot_open_wrapper () from /usr/lib/glusterfs/2.0.4/xlator/performance/io-threads.so
#3  0x00007f7b57ca2ac1 in call_resume () from /usr/lib/libglusterfs.so.0
#4  0x00007f7b564d9ea8 in iot_worker_unordered () from /usr/lib/glusterfs/2.0.4/xlator/performance/io-threads.so
#5  0x00007f7b5786bf9a in start_thread () from /lib/libpthread.so.0
#6  0x00007f7b575e056d in clone () from /lib/libc.so.6
#7  0x0000000000000000 in ?? ()
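The libraries in this trace are stripped, so the frames carry no arguments or source line numbers. Assuming the core file is still available and the binaries are rebuilt with debug symbols, a few standard gdb commands (a general debugging sketch, not taken from the original report) would narrow down the fault inside unify_open:

(gdb) bt full           # backtrace with local variables (requires debug symbols)
(gdb) frame 0           # select the crashing frame in unify_open
(gdb) info registers    # register state at the time of the fault
(gdb) x/16i $pc-32      # disassemble around the faulting instruction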
Hi Vijay,

Let me know whether I should spend time fixing issues with Unify, or whether we should instead get switch ported to dht, make that stable, and move unify to 'legacy/unify'. Please decide on this.

Regards,
(In reply to comment #2)

Hi Amar,

We can accord lower priority to this. Let us concentrate our efforts on switch in dht and get that working.

Regards,
Vijay
Adding a dependency on bug-409; once it is committed, we can close all unify-related bugs.