Bug 1276029

Summary: Upgrading a subset of cluster to 3.7.5 leads to issues with glusterd commands
Product: [Community] GlusterFS
Component: glusterd
Version: 3.7.5
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Raghavendra Talur <rtalur>
Assignee: Gaurav Kumar Garg <ggarg>
QA Contact:
Docs Contact:
CC: amukherj, anekkunt, bugs, gluster-bugs, hchiramm, smohan
Target Milestone: ---
Target Release: ---
Whiteboard:
Fixed In Version: glusterfs-3.7.6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 1276643, 1286294
Environment:
Last Closed: 2015-11-17 06:01:43 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1275914, 1276643, 1277791
Attachments:
logs from first node
logs from second node

Description Raghavendra Talur 2015-10-28 12:46:21 UTC
Created attachment 1087247 [details]
logs from first node

This is an extract from a mail that a user (David Robinson) sent on gluster-users.

Description of problem:

I have a replica pair setup that I was trying to upgrade from 3.7.4 to 3.7.5. 
After upgrading the rpm packages (rpm -Uvh *.rpm) and rebooting one of the nodes, I am now receiving the following:
 
[root@frick01 log]# gluster volume status
Staging failed on frackib01.corvidtec.com. Please check log file for details.
 

[root@frick01 log]# gluster volume info
 
Volume Name: gfs
Type: Replicate
Volume ID: abc63b5c-bed7-4e3d-9057-00930a2d85d3
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: frickib01.corvidtec.com:/data/brick01/gfs
Brick2: frackib01.corvidtec.com:/data/brick01/gfs
Options Reconfigured:
storage.owner-gid: 100
server.allow-insecure: on
performance.readdir-ahead: on
server.event-threads: 4
client.event-threads: 4

How reproducible:
Reported by multiple users.

Logs have been attached.

Comment 1 Raghavendra Talur 2015-10-28 12:47:57 UTC
Created attachment 1087248 [details]
logs from second node

Comment 2 Anand Nekkunti 2015-10-30 14:27:08 UTC
master branch patch link: http://review.gluster.org/#/c/12473/

Comment 3 Vijay Bellur 2015-11-02 09:39:27 UTC
REVIEW: http://review.gluster.org/12486 (glusterd: move new feature (tiering) enum op to the last of the array) posted (#1) for review on release-3.7 by Gaurav Kumar Garg (ggarg)
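
The patch subject points at the likely root cause: the new tiering operation was inserted into the middle of glusterd's op enum/array rather than appended at the end, which shifts the values of every op defined after it, so 3.7.4 and 3.7.5 peers no longer agree on what a given op code means and staging of ordinary commands fails on a mixed-version cluster. Below is a minimal C sketch of that failure mode and of the fix the subject describes; the enum and symbol names are hypothetical, not the actual glusterd identifiers.

/* Illustration only: hypothetical op names, not glusterd's real ones.
 * In C, each enumerator is one greater than the previous one, so where
 * a new entry is inserted determines every value that follows it. */

#include <stdio.h>

/* What an older (3.7.4-style) peer believes: */
enum old_op {
    OP_NONE,          /* 0 */
    OP_CREATE_VOLUME, /* 1 */
    OP_STATUS_VOLUME, /* 2 */
    OP_MAX            /* 3 */
};

/* New op inserted in the middle (the 3.7.5-style mistake):
 * STATUS_VOLUME silently becomes 3, so op code 2 sent by an old peer
 * is read as the tiering op by the upgraded peer, and staging fails. */
enum broken_op {
    B_OP_NONE,          /* 0 */
    B_OP_CREATE_VOLUME, /* 1 */
    B_OP_TIER,          /* 2  <- took the old value of STATUS_VOLUME */
    B_OP_STATUS_VOLUME, /* 3 */
    B_OP_MAX            /* 4 */
};

/* The fix from the patch subject: append the new op at the end so every
 * pre-existing op keeps the value the older release already uses. */
enum fixed_op {
    F_OP_NONE,          /* 0 */
    F_OP_CREATE_VOLUME, /* 1 */
    F_OP_STATUS_VOLUME, /* 2  <- unchanged, compatible with the old peer */
    F_OP_TIER,          /* 3  <- new feature goes last */
    F_OP_MAX            /* 4 */
};

int main(void)
{
    printf("old            STATUS_VOLUME = %d\n", (int)OP_STATUS_VOLUME);
    printf("broken (mid)   STATUS_VOLUME = %d\n", (int)B_OP_STATUS_VOLUME);
    printf("fixed (append) STATUS_VOLUME = %d\n", (int)F_OP_STATUS_VOLUME);
    return 0;
}

However the mismatch surfaces inside glusterd (a misinterpreted op code between peers or a shifted lookup into the op table), the remedy named in the patch subject is the same: new ops are only ever appended at the end of the enum so that existing values stay stable across releases.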

Comment 4 Raghavendra Talur 2015-11-17 06:01:43 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user