Description of problem:
When I mount my 2.0u6 servers from a 2.1 client and write a file, I see a segfault on all peers in the cluster:

Core was generated by `/usr/sbin/glusterfsd -s localhost --volfile-id testvol.192.168.122.12.bricks-br'.
Program terminated with signal 11, Segmentation fault.

And on the client I see:

# dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
dd: writing `/gluster-mount/test.file': Transport endpoint is not connected
dd: closing output file `/gluster-mount/test.file': Transport endpoint is not connected

Version-Release number of selected component (if applicable):
Server - glusterfs-3.3.0.13rhs-1.el6rhs.x86_64
Client - glusterfs-3.4.0.24rhs-1.el6_4.x86_64

How reproducible:
Every time.

Steps to Reproduce:
1. Create a distributed replicated volume on 2.0u6 servers.
2. Mount the volume on a 2.1 client.
3. Run: dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000

Actual results:
Segfault.

Expected results:
No segfault.

Additional info:
Attaching BT and core to BZ.
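The reproduction steps above can be sketched as a shell script. The server hostnames (server1, server2) and brick paths (/bricks/br1, /bricks/br2) are hypothetical placeholders; the volume name (testvol) and mount point (/gluster-mount) are taken from the report. The gluster calls are guarded so the sketch is a no-op on machines without the CLI:

```shell
# Reproduction sketch. Hypothetical pieces: server hostnames (server1,
# server2) and brick paths (/bricks/br1, /bricks/br2). The volume name
# (testvol) and mount point (/gluster-mount) come from the report above.
if command -v gluster >/dev/null 2>&1; then
    # Step 1: create and start a 2x2 distributed-replicated volume
    # on the 2.0u6 servers.
    gluster volume create testvol replica 2 \
        server1:/bricks/br1 server2:/bricks/br1 \
        server1:/bricks/br2 server2:/bricks/br2
    gluster volume start testvol
    # Steps 2-3, run from the 2.1 client:
    #   mount -t glusterfs server1:/testvol /gluster-mount
    #   dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
    result="ran"
else
    result="skipped: gluster CLI not installed"
fi
echo "$result"
```

With two servers contributing two bricks each, `replica 2` pairs the bricks into two replica sets, giving the distributed-replicated layout described in step 1.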
Created attachment 791439 [details] Backtrace 1
Created attachment 791440 [details] Backtrace 2
This is dependent on bug 999944, for which we don't yet have a solution. It looks like we will either patch rhs-2.1 clients to work correctly with rhs-2.0 servers, or patch rhs-2.0 servers to handle rhs-2.1 clients better. Not a blocker for U6 as RHS 2.1 is not out yet, IMO.
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html