+++ This bug was initially created as a clone of Bug #1276029 +++

This is an extract from a mail that a user (David Robinson) sent on gluster-users.

Description of problem:

I have a replica pair setup that I was trying to upgrade from 3.7.4 to 3.7.5. After upgrading the rpm packages (rpm -Uvh *.rpm) and rebooting one of the nodes, I now receive the following:

[root@frick01 log]# gluster volume status
Staging failed on frackib01.corvidtec.com. Please check log file for details.

[root@frick01 log]# gluster volume info

Volume Name: gfs
Type: Replicate
Volume ID: abc63b5c-bed7-4e3d-9057-00930a2d85d3
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: frickib01.corvidtec.com:/data/brick01/gfs
Brick2: frackib01.corvidtec.com:/data/brick01/gfs
Options Reconfigured:
storage.owner-gid: 100
server.allow-insecure: on
performance.readdir-ahead: on
server.event-threads: 4
client.event-threads: 4

How reproducible:
Reported by multiple users. Logs have been attached.

--- Additional comment from Raghavendra Talur on 2015-10-28 08:47 EDT ---
REVIEW: http://review.gluster.org/12473 (glusterd: move new feature (tiering) enum op to the last of the array) posted (#1) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/12473 (glusterd: move new feature (tiering) enum op to the last of the array) posted (#2) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/12473 (glusterd: move new feature (tiering) enum op to the last of the array) posted (#3) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/12473 (glusterd: move new feature (tiering) enum op to the last of the array) posted (#4) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/12473 (glusterd: move new feature (tiering) enum op to the last of the array) posted (#5) for review on master by Atin Mukherjee (amukherj)
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user