Description of problem:

I've upgraded glusterfs 3.4.3 to glusterfs 3.5.3-1 and I'm trying to update the op-version to the new format described at http://gluster.org/community/documentation/index.php/OperatingVersions

I have a volume devstatic that shows this in /var/lib/glusterd/vols/devstatic/info:

# cat /var/lib/glusterd/vols/devstatic/info
type=2
count=4
status=1
sub_count=2
stripe_count=1
replica_count=2
version=64
transport-type=0
volume-id=[removed]
username=[removed]
password=[removed]
op-version=2
client-op-version=2
brick-0=omhq1826:-static-content
brick-1=omdx1448:-static-content
brick-2=omhq1832:-static-content
brick-3=omdx14f0:-static-content

Executing:

# gluster volume set devstatic cluster.op-version 30501
volume set: failed: option : cluster.op-version does not exist
Did you mean cluster.eager-lock?

I've seen other cases where it's just "op-version", and that gives me the same rejection:

# gluster volume set devstatic op-version 30501
volume set: failed: option : op-version does not exist
Did you mean compression?

Version-Release number of selected component (if applicable):

# glusterfs --version
glusterfs 3.5.3 built on Nov 13 2014 11:06:04
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.

I even created a new volume and the new volume still says op-version=2.

How reproducible:

Every time.

Actual results:

/var/lib/glusterd/vols/VOLNAME/info still contains:

op-version=2
client-op-version=2

Expected results:

On glusterfs 3.5.3-1 I would expect op-version=30501.

Additional info:
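For reference, as I read the linked OperatingVersions wiki page, the newer op-version numbers encode a release X.Y.Z as X*10000 + Y*100 + Z (so 30501 corresponds to 3.5.1), whereas the older values such as 2 were plain sequential counters. A minimal sketch of that decoding, assuming this encoding scheme (the function name is my own, not a gluster command):

```shell
# Decode a new-style GlusterFS op-version integer into a release string,
# assuming the X*10000 + Y*100 + Z encoding from the wiki page above.
decode_op_version() {
  v=$1
  echo "$((v / 10000)).$((v / 100 % 100)).$((v % 100))"
}

decode_op_version 30501   # prints 3.5.1
```

This only illustrates the numbering format; it does not change the cluster's op-version.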
This bug is being closed because the 3.5 release is marked End-Of-Life. There will be no further updates to this version. If you are still facing this issue in a more current release, please open a new bug against a version that still receives bugfixes.