Created attachment 1087247 [details]
logs from first node

This is an extract from a mail that a user (David Robinson) sent on gluster-users.

Description of problem:
I have a replica pair setup that I was trying to upgrade from 3.7.4 to 3.7.5. After upgrading the rpm packages (rpm -Uvh *.rpm) and rebooting one of the nodes, I am now receiving the following:

[root@frick01 log]# gluster volume status
Staging failed on frackib01.corvidtec.com. Please check log file for details.

[root@frick01 log]# gluster volume info

Volume Name: gfs
Type: Replicate
Volume ID: abc63b5c-bed7-4e3d-9057-00930a2d85d3
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp,rdma
Bricks:
Brick1: frickib01.corvidtec.com:/data/brick01/gfs
Brick2: frackib01.corvidtec.com:/data/brick01/gfs
Options Reconfigured:
storage.owner-gid: 100
server.allow-insecure: on
performance.readdir-ahead: on
server.event-threads: 4
client.event-threads: 4

How reproducible:
Reported by multiple users. Logs have been attached.
Created attachment 1087248 [details]
logs from second node
master branch patch link: http://review.gluster.org/#/c/12473/
REVIEW: http://review.gluster.org/12486 (glusterd: move new feature (tiering) enum op to the last of the array) posted (#1) for review on release-3.7 by Gaurav Kumar Garg (ggarg)
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user