Red Hat Bugzilla – Bug 1002194
Mounting and writing to 2.0 u6 servers from 2.1 clients causes segfault.
Last modified: 2015-03-23 03:40:40 EDT
Description of problem:
When I mount my 2.0u6 servers from a 2.1 client and write a file, I see a segfault on all peers in the cluster:
Core was generated by `/usr/sbin/glusterfsd -s localhost --volfile-id testvol.192.168.122.12.bricks-br'.
Program terminated with signal 11, Segmentation fault.
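For reference, a backtrace like the one attached later can be pulled from the core with gdb (the core file path below is a placeholder; adjust to wherever cores land on the affected server):

```shell
# Load the crashing glusterfsd binary together with its core dump and
# print a full backtrace of every thread non-interactively.
# /var/core/core.12345 is a hypothetical path, not from this report.
gdb /usr/sbin/glusterfsd /var/core/core.12345 \
    -batch -ex 'thread apply all bt full'
```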
And on the client I see:
# dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
dd: writing `/gluster-mount/test.file': Transport endpoint is not connected
dd: closing output file `/gluster-mount/test.file': Transport endpoint is not connected
Version-Release number of selected component (if applicable):
Server - glusterfs-126.96.36.199rhs-1.el6rhs.x86_64
Client - glusterfs-188.8.131.52rhs-1.el6_4.x86_64
Steps to Reproduce:
1. Create a distributed replicated volume on 2.0u6 servers.
2. Mount the volume on a 2.1 client.
3. Run: dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
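The steps above can be sketched as a single script. Hostnames, brick paths, and the mount point are placeholders, and a 2x2 distributed-replicated layout is assumed (the report does not state the brick count):

```shell
# On one of the 2.0u6 servers: create and start a distributed-
# replicated volume. server1..server4 and the brick paths are
# hypothetical examples.
gluster volume create testvol replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1 \
    server3:/bricks/brick1 server4:/bricks/brick1
gluster volume start testvol

# On the 2.1 client: mount the volume over FUSE and write to it.
mount -t glusterfs server1:/testvol /gluster-mount
dd if=/dev/zero of=/gluster-mount/test.file bs=1024k count=1000
# Observed failure mode: dd reports "Transport endpoint is not
# connected" while glusterfsd segfaults on the servers.
```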
Attaching BT and core to BZ.
Created attachment 791439
Created attachment 791440
This is dependent on bug 999944, for which we don't yet have a solution. It looks like we either patch rhs-2.1 clients to work correctly with rhs-2.0 servers, or patch rhs-2.0 servers to handle rhs-2.1 clients better.
Not a blocker for U6, as RHS 2.1 is not out yet, IMO.
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL), hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.