Bug 1122816 - [SNAPSHOT]: In mixed cluster with RHS 2.1 U2 & RHS 3.0, newly created volume should not contain snapshot related options displayed in 'gluster volume info'
Summary: [SNAPSHOT]: In mixed cluster with RHS 2.1 U2 & RHS 3.0, newly created volume ...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.5.0
Hardware: x86_64
OS: Linux
Importance: unspecified medium
Target Milestone: ---
Assignee: Vijaikumar Mallikarjuna
QA Contact:
URL:
Whiteboard: SNAPSHOT
Depends On: 1113852
Blocks: 1145068
Reported: 2014-07-24 07:20 UTC by Vijaikumar Mallikarjuna
Modified: 2016-05-11 22:47 UTC (History)
9 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1113852
: 1145068
Environment:
Last Closed: 2015-05-14 17:26:48 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Vijaikumar Mallikarjuna 2014-07-24 07:20:45 UTC
+++ This bug was initially created as a clone of Bug #1113852 +++

Description of problem:
-----------------------
Gluster volumes created in a hybrid (mixed) cluster of RHS 2.1U2 and RHS 3.0 show snapshot-related options in the 'gluster volume info' output

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHS 3.0 (glusterfs-3.6.0.22-1.el6rhs)

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Peer probe a Denali node (RHS 3.0) from a Corbett node (RHS 2.1U2)
(or)
Create a cluster of two nodes running RHS 2.1U2 and upgrade one node to RHS 3.0

2. Create a new volume (of any type)

3. Check 'gluster volume info' output

Actual results:
---------------
Volume snapshot-related options are shown in the 'gluster volume info' output

Expected results:
-----------------
The op-version of the cluster is set to 2 (since one of the nodes runs RHS 2.1U2), so snapshot-related options should not be made available on the volume

Additional info:
----------------
1. 'gluster volume info' on RHS 3.0 shows the volume snapshot-related options

[root@rhss3 ~]# gluster v i

Volume Name: dvol
Type: Distribute
Volume ID: 761e4c4c-76f3-4b37-af04-1e465a0a4139
Status: Started
Snap Volume: no
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.70.37.136:/rhs/brick1/b1
Options Reconfigured:
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable

2. 'gluster volume info' on RHS 2.1U2 doesn't show the volume snapshot-related options

[root@corbett ~]# gluster v i

Volume Name: dvol
Type: Distribute
Volume ID: 761e4c4c-76f3-4b37-af04-1e465a0a4139
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.70.37.136:/rhs/brick1/b1

--- Additional comment from Sayan Saha on 2014-06-29 22:04:21 EDT ---

This is only relevant if we see it while upgrading from 2.1 to 3.0, not between two intermediate QE builds of 3.0.

--- Additional comment from Vijaikumar Mallikarjuna on 2014-07-24 03:15:47 EDT ---

With the patch http://review.gluster.org/#/c/8191/, the snap-max-hard-limit, snap-max-soft-limit, and auto-delete values will not be shown unless they are set explicitly.

Currently we get the below error message when one of the nodes in the cluster is running a glusterd version less than 3.6:
root@rh1:~/workspace/git/glusterfs # gluster snapshot create snap1 vol1
snapshot create: failed: Another transaction is in progress Please try again after sometime.
Snapshot command failed 


I will submit another patch to display the correct error message when a snapshot operation is performed on a cluster with op-version less than 3.6.

Comment 1 Anand Avati 2014-07-24 07:21:51 UTC
REVIEW: http://review.gluster.org/8371 (glusterd/snapshot: Print correct error message  on cli for snapshot operation performed on a cluster with op-version less than 30600) posted (#1) for review on master by Vijaikumar Mallikarjuna (vmallika@redhat.com)

Comment 2 Anand Avati 2014-07-24 12:03:44 UTC
COMMIT: http://review.gluster.org/8371 committed in master by Kaushal M (kaushal@redhat.com) 
------
commit ddd132a3b20d650edbda318c773b6d54a04f6675
Author: Vijaikumar M <vmallika@redhat.com>
Date:   Thu Jul 24 12:47:04 2014 +0530

    glusterd/snapshot: Print correct error message  on cli
    for snapshot operation performed on a cluster with
    op-version less than 30600
    
    Currently we get error message as  on cli 'Another transaction is in progress
    Please try again after sometime' when a snapshot operation is performed
    on a cluster with op-version less than 30600.
    We need to print the correct error message in this case.
    
    Change-Id: I5f144428d928393c3796bde96ce6e3a40fca8141
    BUG: 1122816
    Signed-off-by: Vijaikumar M <vmallika@redhat.com>
    Reviewed-on: http://review.gluster.org/8371
    Reviewed-by: Avra Sengupta <asengupt@redhat.com>
    Reviewed-by: Sachin Pandit <spandit@redhat.com>
    Reviewed-by: Atin Mukherjee <amukherj@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Kaushal M <kaushal@redhat.com>

Comment 3 Niels de Vos 2015-05-14 17:26:48 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

