Bug 1347686 - IO error seen with rolling or non-disruptive upgrade of a distribute-disperse (EC) volume from 3.7.5 to 3.7.9
Summary: IO error seen with rolling or non-disruptive upgrade of a distribute-disperse (EC) volume from 3.7.5 to 3.7.9
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Ashish Pandey
QA Contact:
URL:
Whiteboard:
Depends On: 1347251
Blocks: 1360152 1360174
 
Reported: 2016-06-17 12:08 UTC by Ashish Pandey
Modified: 2017-03-27 18:28 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.9.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1347251
Clones: 1360152 1360174
Environment:
Last Closed: 2017-03-27 18:28:06 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Ashish Pandey 2016-06-17 12:10:17 UTC
Steps to Reproduce:
=================
1. Have a 3-node cluster with 4 bricks each and make sure all configuration needed for an update is in place (channel registration, storage subscription, etc.).
2. Create a dist-ec volume such that each node hosts at most 2 bricks of one dht-subvol.
3. Start the volume.
4. Enable quota.
5. Mount the volume on a client using FUSE.
6. Trigger IO by downloading a kernel tarball and starting to untar it.
7. On one node (say node3), bring down all gluster processes, including bricks, daemons and glusterd, using pkill glusterfs, pkill glusterfsd and service glusterd stop.
8. Issue a yum update to update to the latest packages, including gluster.
===> IO should continue without any issue.
9. Once the update is successful, start glusterd ===> this too will work and IO will still be happening.
10. Now that node3 is updated successfully, move on to the next node, say node2.
11. Kill glusterfs, glusterfsd and glusterd on node2.
===> You will now hit an IO error or IO will stop abruptly.
To recheck, just create a directory and try to copy the kernel tarball into it; this will fail as below:
[root@nchilaka-rhel6-fuseclient1-43-172 kern]# cp linux-4.6.2.tar.xz dir.6
cp: reading `linux-4.6.2.tar.xz': Input/output error

Comment 2 Ashish Pandey 2016-06-17 12:22:34 UTC
For glusterfs 3.7.5, feature/lock was not returning the lock count in xdata that ec requested.

To solve a hang issue, we modified the code so that if there is a request for the inodelk count in xdata, feature/lock returns it in xdata.

So for glusterfs 3.7.9, ec gets the inodelk count in xdata from feature/lock.

This issue arises when we do a rolling update from 3.7.5 to 3.7.9.
For a 4+2 volume running 3.7.5, if we update 2 nodes and, after heal completion, kill 2 of the older nodes, this problem can be seen.
After the update and the killing of bricks, the 2 updated nodes will return the inodelk count while the 2 older nodes will not.

During the dictionary match in ec_dict_compare, this leads to a mismatch of answers, and the file operation on the mount point fails with an IO error.
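
To make the failure mode concrete, here is a minimal standalone sketch (illustrative only; the struct and helper names are made up for this comment and are not the actual GlusterFS code) of how a strict key-by-key comparison of the callback dicts behaves when only some bricks report the inodelk count:

/* Illustrative standalone model, not GlusterFS source. */
#include <stdio.h>

/* Toy model of one brick's callback xdata: only the field relevant here. */
struct answer {
    int has_inodelk_count;   /* set by 3.7.9 bricks, absent on 3.7.5 bricks */
    int inodelk_count;
};

/* Strict comparison in the spirit of the pre-fix behaviour: any difference
 * in which keys are present (or in their values) makes the answers mismatch. */
static int answers_match_strict(const struct answer *a, const struct answer *b)
{
    if (a->has_inodelk_count != b->has_inodelk_count)
        return 0;
    if (a->has_inodelk_count && a->inodelk_count != b->inodelk_count)
        return 0;
    return 1;
}

int main(void)
{
    struct answer upgraded_brick = { .has_inodelk_count = 1, .inodelk_count = 1 };
    struct answer old_brick      = { .has_inodelk_count = 0, .inodelk_count = 0 };

    if (!answers_match_strict(&upgraded_brick, &old_brick))
        printf("answers mismatch -> ec cannot group enough matching answers -> EIO on the mount\n");
    else
        printf("answers match\n");
    return 0;
}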

Comment 3 Vijay Bellur 2016-06-17 12:58:05 UTC
REVIEW: http://review.gluster.org/14761 (cluster/ec: Match xdata key if present in both dicts) posted (#1) for review on master by Ashish Pandey (aspandey)

Comment 4 Vijay Bellur 2016-07-21 11:18:03 UTC
REVIEW: http://review.gluster.org/14761 (cluster/ec: Match xdata key if present in both dicts) posted (#2) for review on master by Ashish Pandey (aspandey)

Comment 5 Vijay Bellur 2016-07-22 07:53:43 UTC
REVIEW: http://review.gluster.org/14761 (cluster/ec: Handle absence of keys in some callback dict) posted (#3) for review on master by Ashish Pandey (aspandey)

Comment 6 Vijay Bellur 2016-07-26 06:07:07 UTC
COMMIT: http://review.gluster.org/14761 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit 558a45fa527b01ec81904150532a8b661c06ae8a
Author: Ashish Pandey <aspandey>
Date:   Fri Jun 17 17:52:56 2016 +0530

    cluster/ec: Handle absence of keys in some callback dict
    
    Problem: This issue arises when we do a rolling update
    from 3.7.5 to 3.7.9.
    For 4+2 volume running 3.7.5, if we update 2 nodes
    and after heal completion  kill 2 older nodes, this
    problem can be seen. After update and killing of
    bricks, 2 nodes will return inodelk count key in dict
    while other 2 nodes will not have inodelk count in dict.
    This is also true for get-link-count.
    During dictionary match , ec_dict_compare, this will
    lead to mismatch of answers and the file operation
    on mount point will fail with IO error.
    
    Solution:
    Don't match inode, entry and link count keys while
    comparing two dictionaries. However, while combining the
    data in ec_dict_combine, go through all the dictionaries
    and select the maximum values received in different dicts
    for these keys.
    
    Change-Id: I33546e3619fe8f909286ee48fb0df2009cd3d22f
    BUG: 1347686
    Signed-off-by: Ashish Pandey <aspandey>
    Reviewed-on: http://review.gluster.org/14761
    Reviewed-by: Xavier Hernandez <xhernandez>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
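
As a rough illustration of the approach described in the commit message (again, standalone illustrative code, not the actual patch from http://review.gluster.org/14761), the count keys are ignored when deciding whether two answers match, and the maximum value reported by any brick is kept when the answers are combined:

/* Illustrative standalone model of the fix, not GlusterFS source. */
#include <stdio.h>

struct answer {
    int has_inodelk_count;
    int inodelk_count;
};

/* Post-fix comparison: the inodelk/entrylk/link count keys are simply not
 * considered, so answers from mixed 3.7.5/3.7.9 bricks can still match. */
static int answers_match(const struct answer *a, const struct answer *b)
{
    (void)a;
    (void)b;          /* no other keys are modelled in this sketch */
    return 1;
}

/* Post-fix combine step: keep the maximum count reported by any brick. */
static void answers_combine(struct answer *dst, const struct answer *src)
{
    if (src->has_inodelk_count &&
        (!dst->has_inodelk_count || src->inodelk_count > dst->inodelk_count)) {
        dst->has_inodelk_count = 1;
        dst->inodelk_count = src->inodelk_count;
    }
}

int main(void)
{
    struct answer upgraded_brick = { .has_inodelk_count = 1, .inodelk_count = 2 };
    struct answer old_brick      = { .has_inodelk_count = 0, .inodelk_count = 0 };
    struct answer combined       = { 0 };

    if (answers_match(&upgraded_brick, &old_brick)) {
        answers_combine(&combined, &upgraded_brick);
        answers_combine(&combined, &old_brick);
        printf("answers match; combined inodelk count = %d\n",
               combined.inodelk_count);
    }
    return 0;
}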

Comment 7 Shyamsundar 2017-03-27 18:28:06 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/

