Description of problem:
=======================
When a volume snapshot is created, the op-version of the snapshotted volume
is set to 4. For example, if the op-version of the volume is 3 and a
snapshot of that volume is taken, the op-version of the snap volume is set
to 4. Ideally it should be whatever is set for the volume.

op-version of the volume before the snapshot is created is 3:
=============================================================
[root@inception ~]# cat /var/lib/glusterd/vols/vol2/info | grep op-version
op-version=3
client-op-version=2
[root@inception ~]#

Created a snapshot of the volume and checked its op-version; it is 4:
=====================================================================
[root@inception ~]# gluster snapshot create snapshot1 vol2
snapshot create: success: Snap snapshot1 created successfully
[root@inception ~]# cat /var/lib/glusterd/snaps/snapshot1/25861da271434f86a896e6101ca31345/info | grep op-version
op-version=4
client-op-version=2
[root@inception ~]#

Restored the volume to snapshot1 and checked the op-version of the volume; it is 4:
===================================================================================
[root@inception ~]# gluster volume stop vol2
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: vol2: success
[root@inception ~]# gluster snapshot restore snapshot1
Snapshot restore: snapshot1: Snap restored successfully
[root@inception ~]# cat /var/lib/glusterd/vols/vol2/info | grep op-version
op-version=4
client-op-version=2
[root@inception ~]#

The impact of this is unknown to me, but one possibility discussed is that
clients might not be able to reconnect. Raising the bug with high priority.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.6.0.12-1.el6rhs.x86_64

How reproducible:
=================
1/1

Steps to Reproduce:
===================
1. Create and start a volume
2. Check the volume op-version
3. Create a snapshot of the volume
4. Check the snapshot volume op-version

Actual results:
===============
op-version of the snap volume is set to 4, whereas the op-version of the
volume is different (could be 2/3)

Expected results:
=================
The op-version of the snap volume should not be bumped up; it should remain
that of the original volume
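The comparison behind steps 2 and 4 can be sketched as follows. This is a minimal illustration, with the two values hardcoded from the transcript above; in practice they would be read from the respective info files (e.g. with `grep '^op-version=' <info-file>`):

```shell
# Sketch of the op-version comparison from the steps above. Values are
# hardcoded from the transcript; in a real check they would be read from
# /var/lib/glusterd/vols/<vol>/info and the snapshot's info file.
vol_op_version=3    # volume op-version before the snapshot
snap_op_version=4   # snapshot op-version after 'gluster snapshot create'

if [ "$snap_op_version" -ne "$vol_op_version" ]; then
    echo "BUG: snapshot op-version ($snap_op_version) != volume op-version ($vol_op_version)"
fi
```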
Upstream patch in review at http://review.gluster.org/#/c/7986/
Fix at https://code.engineering.redhat.com/gerrit/26663
Verified with build: glusterfs-3.6.0.16-1.el6rhs.x86_64

As per bz 1096425 comment 8, the op-versions are now multi-digit integer
values composed of the version numbers, instead of a simple incrementing
integer. An X.Y.Z release will have XYZ as its op-version.

Initial op-version: 30000
=========================
[root@inception ~]# cat /var/lib/glusterd/vols/vol0/info | grep op-version
op-version=30000

op-version of snapshot: 30000
=============================
[root@inception ~]# cat /var/lib/glusterd/snaps/snap1/a8097ef84c334a69a3a0ca2e1ebf9649/info
type=2
count=4
status=1
sub_count=2
stripe_count=1
replica_count=2
version=2
transport-type=0
volume-id=a8097ef8-4c33-4a69-a3a0-ca2e1ebf9649
username=135b4afb-bdb7-4144-bffa-679cd1ad6df9
password=30d86d36-adbb-4f44-b393-5049d93f0622
op-version=30000

After Restore: 30000
====================
[root@inception ~]# cat /var/lib/glusterd/vols/vol0/info | grep op-version
op-version=30000

Moving the bug to verified state
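The X.Y.Z -> XYZ encoding referenced from bz 1096425 comment 8 can be sketched as below. Note `op_version` is a hypothetical helper for illustration only, not a gluster tool; glusterd computes this value internally:

```shell
# Hypothetical helper illustrating the op-version encoding described in
# bz 1096425 comment 8: a release X.Y.Z maps to the integer X*10000 + Y*100 + Z.
op_version() {
    x=${1%%.*}        # major
    rest=${1#*.}
    y=${rest%%.*}     # minor
    z=${rest#*.}      # patch
    echo $(( x * 10000 + y * 100 + z ))
}

op_version 3.0.0   # prints 30000, the value seen in the info files above
op_version 3.6.0   # prints 30600
```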
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-1278.html