Steps to Reproduce:
=================
1. Have a 3-node cluster with 4 bricks each, and make sure all configuration for performing an update is in place (channel registration, storage subscription, etc.).
2. Create a distributed-dispersed (dist-ec) volume such that each node hosts at most 2 bricks of one DHT subvolume.
3. Start the volume.
4. Enable quota.
5. Mount the volume on a client using FUSE.
6. Trigger I/O by downloading a kernel tarball and untarring it.
7. On one node (say node3), bring down all gluster processes, including bricks, daemons, and glusterd, using pkill glusterfs, pkill glusterfsd, and service glusterd stop.
8. Issue a yum update to update to the latest packages, including gluster. ===> I/O should continue without any issue.
9. Once the update is successful, start glusterd. ===> This too will work, and I/O will still be happening.
10. Now that node3 is updated successfully, move on to the next node, say node2.
11. Kill glusterfs, glusterfsd, and glusterd on node2. ===> You will now hit an I/O error, or I/O will stop abruptly.

To recheck, just create a directory and try to copy the kernel tarball into it; it will fail as below:

[root@nchilaka-rhel6-fuseclient1-43-172 kern]# cp linux-4.6.2.tar.xz dir.6
cp: reading `linux-4.6.2.tar.xz': Input/output error
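The setup in steps 2-9 can be condensed into the shell sketch below. This is an illustrative ops fragment, not a runnable script: the host names (n1-n3), brick paths, and volume name "ecvol" are placeholders, and the commands assume a live 3-node gluster cluster with the repositories already configured.

```shell
# Assumption: hosts n1, n2, n3 each export bricks under /bricks;
# a 4+2 dispersed volume with 2 bricks per node on each subvolume (step 2).
gluster volume create ecvol disperse 6 redundancy 2 \
    n1:/bricks/b1 n1:/bricks/b2 n2:/bricks/b1 n2:/bricks/b2 \
    n3:/bricks/b1 n3:/bricks/b2
gluster volume start ecvol                 # step 3
gluster volume quota ecvol enable          # step 4

# On the client (steps 5-6): FUSE mount, then start I/O.
mount -t glusterfs n1:/ecvol /mnt/ecvol
cd /mnt/ecvol && tar -xf linux-4.6.2.tar.xz &

# On the node being updated (steps 7-9): stop everything, update, restart.
pkill glusterfs; pkill glusterfsd; service glusterd stop
yum update -y 'glusterfs*'
service glusterd start
```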
In glusterfs 3.7.5, features/locks did not return the lock count in xdata that ec requested. To solve a hang issue, the code was modified so that if there is any request for the inodelk count in xdata, features/locks returns it via xdata. So in glusterfs 3.7.9, ec receives the inodelk count in xdata from features/locks.

This issue arises when we do a rolling update from 3.7.5 to 3.7.9. For a 4+2 volume running 3.7.5, if we update 2 nodes and, after heal completion, kill the 2 older nodes, this problem can be seen. After the update and the killing of bricks, 2 nodes will return the inodelk count while the 2 older nodes will not. During the dictionary match, ec_dict_compare, this leads to a mismatch of answers, and the file operation on the mount point fails with an I/O error.
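The mismatch and the fix described in the committed patch can be modeled outside of gluster. In the sketch below, plain Python dicts stand in for gluster's dict_t; the function names echo ec's ec_dict_compare/ec_dict_combine and the key names are illustrative, not the real xdata keys or implementation. A naive compare rejects two answers whose xdata differ only in a count key that old bricks never set, while the fixed compare skips such keys and the combine step takes the maximum value seen across answers:

```python
# Illustrative model of ec's xdata handling; keys and bodies are simplified.
COUNT_KEYS = {"inodelk-count", "entrylk-count", "link-count"}

def naive_compare(d1, d2):
    """Pre-fix behavior: any key difference makes the answers mismatch."""
    return d1 == d2

def fixed_compare(d1, d2, skip=COUNT_KEYS):
    """Post-fix behavior: ignore count keys that old bricks may not send."""
    strip = lambda d: {k: v for k, v in d.items() if k not in skip}
    return strip(d1) == strip(d2)

def combine_max(dicts, keys=COUNT_KEYS):
    """Select the maximum value received for each count key across answers."""
    out = {}
    for k in keys:
        vals = [d[k] for d in dicts if k in d]
        if vals:
            out[k] = max(vals)
    return out

# An updated brick reports an inodelk count; an old 3.7.5 brick does not.
new_brick = {"op_ret": 0, "inodelk-count": 2}
old_brick = {"op_ret": 0}

print(naive_compare(new_brick, old_brick))   # False -> answers split, EIO
print(fixed_compare(new_brick, old_brick))   # True  -> answers match
print(combine_max([new_brick, old_brick]))   # {'inodelk-count': 2}
```

The design point is that absence of a count key on an old brick is not a real disagreement about the file, so it must not split the answer set; taking the maximum when combining keeps the most conservative count for the client.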
REVIEW: http://review.gluster.org/14761 (cluster/ec: Match xdata key if present in both dicts) posted (#1) for review on master by Ashish Pandey (aspandey)
REVIEW: http://review.gluster.org/14761 (cluster/ec: Match xdata key if present in both dicts) posted (#2) for review on master by Ashish Pandey (aspandey)
REVIEW: http://review.gluster.org/14761 (cluster/ec: Handle absence of keys in some callback dict) posted (#3) for review on master by Ashish Pandey (aspandey)
COMMIT: http://review.gluster.org/14761 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit 558a45fa527b01ec81904150532a8b661c06ae8a
Author: Ashish Pandey <aspandey>
Date: Fri Jun 17 17:52:56 2016 +0530

    cluster/ec: Handle absence of keys in some callback dict

    Problem:
    This issue arises when we do a rolling update from 3.7.5 to 3.7.9.
    For a 4+2 volume running 3.7.5, if we update 2 nodes and after heal
    completion kill 2 older nodes, this problem can be seen. After the
    update and killing of bricks, 2 nodes will return the inodelk count
    key in the dict while the other 2 nodes will not have it. This is
    also true for get-link-count. During the dictionary match,
    ec_dict_compare, this will lead to a mismatch of answers and the
    file operation on the mount point will fail with an IO error.

    Solution:
    Don't match inode, entry and link count keys while comparing two
    dictionaries. However, while combining the data in ec_dict_combine,
    go through all the dictionaries and select the maximum values
    received in different dicts for these keys.

    Change-Id: I33546e3619fe8f909286ee48fb0df2009cd3d22f
    BUG: 1347686
    Signed-off-by: Ashish Pandey <aspandey>
    Reviewed-on: http://review.gluster.org/14761
    Reviewed-by: Xavier Hernandez <xhernandez>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/