I tried to set the read-subvolume to work around a different bug:

$ gluster volume set public cluster.read-subvolume public-client-0
volume set: failed: One or more connected clients cannot support the feature being set. These clients need to be upgraded or disconnected before running this command again

(Odd, since read-subvolume has been in since 2.0.) So I looked to see what clients were connected:

$ gluster volume status public clients
Client connections for volume public
----------------------------------------------
Brick : strabo:/data/gluster/fileshare/public
Clients connected : 6
Hostname                 BytesRead    BytesWritten
--------                 ---------    ------------
192.168.2.100:65530      124640       121188
192.168.2.101:65529      84624        1311300
192.168.2.8:65523        960          600
192.168.2.30:65497       964          600
192.168.2.3:65517        4076         3596
192.168.2.30:65518       29372        18264
----------------------------------------------
Brick : nightshade:/data/gluster/fileshare/public
Clients connected : 4
Hostname                 BytesRead    BytesWritten
--------                 ---------    ------------
192.168.2.30:65501       4088         3616
192.168.2.8:65527        964          608
192.168.2.3:65514        960          608
192.168.2.30:65523       31840        20916
----------------------------------------------

Which doesn't tell me which client could be the problem. Reporting the client version, or at least the max-op-version, would be useful.
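In the meantime, the client list can at least be parsed to correlate connections by host. Below is a minimal sketch of a parser for the plain-text layout shown above (the sample data is the output from this report; the layout assumptions, e.g. the "Brick :" prefix and the IP:port column format, are taken from this paste and may differ in other releases):

```python
import re

# Abbreviated sample of `gluster volume status public clients`
# output, as captured in this report.
SAMPLE = """\
Client connections for volume public
----------------------------------------------
Brick : strabo:/data/gluster/fileshare/public
Clients connected : 6
Hostname                 BytesRead    BytesWritten
--------                 ---------    ------------
192.168.2.100:65530      124640       121188
192.168.2.101:65529      84624        1311300
192.168.2.8:65523        960          600
192.168.2.30:65497       964          600
192.168.2.3:65517        4076         3596
192.168.2.30:65518       29372        18264
----------------------------------------------
Brick : nightshade:/data/gluster/fileshare/public
Clients connected : 4
Hostname                 BytesRead    BytesWritten
--------                 ---------    ------------
192.168.2.30:65501       4088         3616
192.168.2.8:65527        964          608
192.168.2.3:65514        960          608
192.168.2.30:65523       31840        20916
----------------------------------------------
"""

def parse_clients(text):
    """Return {brick: [(host:port, bytes_read, bytes_written), ...]}."""
    bricks = {}
    current = None
    for line in text.splitlines():
        if line.startswith("Brick :"):
            # Everything after the first colon is the brick path.
            current = line.split(":", 1)[1].strip()
            bricks[current] = []
        elif current:
            # Client rows look like "IP:port  bytes  bytes".
            m = re.match(r"(\d+\.\d+\.\d+\.\d+:\d+)\s+(\d+)\s+(\d+)", line)
            if m:
                bricks[current].append(
                    (m.group(1), int(m.group(2)), int(m.group(3))))
    return bricks

if __name__ == "__main__":
    for brick, clients in parse_clients(SAMPLE).items():
        # Collapse ports to see which distinct hosts are connected.
        hosts = sorted({c[0].rsplit(":", 1)[0] for c in clients})
        print(f"{brick}: {len(clients)} connections from {hosts}")
```

This only narrows the search to hosts; without the op-version in the output there is still no way to tell which of them is the incompatible client, which is the point of this report.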
I have a patch, http://review.gluster.org/#/c/11831/, which addresses this problem in a slightly different way: it enhances the log/output for an unsupported client. The patch needs some review attention before it gets into the codebase.
This bug is getting closed because GlusterFS-3.7 has reached its end-of-life. Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS. If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
http://review.gluster.org/16303 addresses this issue; the fix is already available in the 3.10.0 release.
Thanks, Atin. I missed that it had been released, since someone opened a duplicate of this bug without checking whether one already existed.
*** This bug has been marked as a duplicate of bug 1409078 ***