Bug 1294062 - [georep+disperse]: Geo-Rep session went to faulty with errors "[Errno 5] Input/output error"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: disperse
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Kotresh HR
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1296496 1299184 1313623
 
Reported: 2015-12-24 11:05 UTC by Rahul Hinduja
Modified: 2016-09-17 15:05 UTC (History)
11 users

Fixed In Version: glusterfs-3.7.9-2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1296496
Environment:
Last Closed: 2016-06-23 05:00:30 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1240 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 Update 3 2016-06-23 08:51:28 UTC

Description Rahul Hinduja 2015-12-24 11:05:35 UTC
Description of problem:
=======================

The geo-rep session went faulty with the following errors in the geo-rep logs:

[2015-12-24 10:57:45.463694] E [syncdutils(/rhs/brick2/ct-8):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 662, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1439, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 586, in crawlwrap
    '.', '.'.join([str(self.uuid), str(gconf.slave_id)]))
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 323, in ff
    return f(*a)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 489, in stime_mnt
    8)
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 55, in lgetxattr
    return cls._query_xattr(path, siz, 'lgetxattr', attr)
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 47, in _query_xattr
    cls.raise_oserr()
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 37, in raise_oserr
    raise OSError(errn, os.strerror(errn))
OSError: [Errno 5] Input/output error


getfattr on the slave mount reports an Input/Output error:

[root@dhcp37-133 ~]# getfattr -d -m . -e hex /mnt/test/
getfattr: Removing leading '/' from absolute path names
# file: mnt/test/
security.selinux=0x73797374656d5f753a6f626a6563745f723a6675736566735f743a733000
/mnt/test/: trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.5fc0f095-23c4-4b96-9d94-69decd14f1d4.stime: Input/output error
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.xtime=0x3000
trusted.tier.tier-dht=0x000000010000000000000000ffffffff
trusted.tier.tier-dht.commithash=0x3330313736383334313800

[root@dhcp37-133 ~]# 
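For reference, the stime/xtime hex values in these xattr dumps appear to pack two big-endian 32-bit integers (seconds since the epoch, then a sub-second component); a minimal sketch to decode them under that assumption:

```python
import struct
from datetime import datetime, timezone

def decode_stime(hex_value: str):
    """Decode an 8-byte stime xattr hex dump into (seconds, subsec).

    Assumes two big-endian 32-bit unsigned ints, which matches the
    values in this report (0x567bc34f is ~2015-12-24 UTC, the day the
    bug was filed).
    """
    raw = bytes.fromhex(hex_value.removeprefix("0x"))
    seconds, subsec = struct.unpack(">II", raw)
    return seconds, subsec

# stime from brick ct-7 above
secs, _ = decode_stime("0x567bc34f00000000")
print(secs, datetime.fromtimestamp(secs, tz=timezone.utc).isoformat())
```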

[root@dhcp37-133 ~]# mount | grep test
10.70.37.165:/tiervolume on /mnt/test type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@dhcp37-133 ~]# 


Client log snippet:
===================

# less /var/log/glusterfs/mnt-test.log 

[2015-12-24 10:28:50.791227] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-tiervolume-disperse-1: Heal failed [Input/output error]
The message "W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-tiervolume-disperse-1: Heal failed [Input/output error]" repeated 6 times between [2015-12-24 10:28:50.791227] and [2015-12-24 10:28:51.062863]
[2015-12-24 10:39:42.715503] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-tiervolume-disperse-1: Mismatching xdata in answers of 'LOOKUP'
[2015-12-24 10:39:42.718869] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-tiervolume-disperse-1: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=3E, bad=1)
[2015-12-24 10:39:42.727887] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-tiervolume-disperse-1: Heal failed [Input/output error]
[2015-12-24 10:39:42.919641] N [MSGID: 122031] [ec-generic.c:1133:ec_combine_xattrop] 0-tiervolume-disperse-1: Mismatching dictionary in answers of 'GF_FOP_XATTROP'
[2015-12-24 10:39:42.919750] W [MSGID: 122040] [ec-common.c:907:ec_prepare_update_cbk] 0-tiervolume-disperse-1: Failed to get size and version [Input/output error]
[2015-12-24 10:39:42.926486] W [fuse-bridge.c:3355:fuse_xattr_cbk] 0-glusterfs-fuse: 15: GETXATTR(trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.5fc0f095-23c4-4b96-9d94-69decd14f1d4.stime) / => -1 (Input/output error)
[2015-12-24 10:39:42.925954] N [MSGID: 122031] [ec-generic.c:1133:ec_combine_xattrop] 0-tiervolume-disperse-1: Mismatching dictionary in answers of 'GF_FOP_XATTROP'
[2015-12-24 10:39:42.926445] W [MSGID: 122040] [ec-common.c:907:ec_prepare_update_cbk] 0-tiervolume-disperse-1: Failed to get size and version [Input/output error]
[2015-12-24 10:58:58.908160] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-tiervolume-disperse-1: Mismatching xdata in answers of 'LOOKUP'
[2015-12-24 10:58:58.909422] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-tiervolume-disperse-1: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=3E, bad=1)
[2015-12-24 10:58:58.918637] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-tiervolume-disperse-1: Heal failed [Input/output error]
[2015-12-24 10:58:58.922502] N [MSGID: 122031] [ec-generic.c:1133:ec_combine_xattrop] 0-tiervolume-disperse-1: Mismatching dictionary in answers of 'GF_FOP_XATTROP'
[2015-12-24 10:58:58.924043] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-tiervolume-disperse-1: Operation failed on some subvolumes (up=3F, mask=3E, remaining=0, good=3E, bad=1)
The message "N [MSGID: 122031] [ec-generic.c:1133:ec_combine_xattrop] 0-tiervolume-disperse-1: Mismatching dictionary in answers of 'GF_FOP_XATTROP'" repeated 2 times between [2015-12-24 10:58:58.922502] and [2015-12-24 10:58:58.972485]
[2015-12-24 10:58:58.973055] W [MSGID: 122040] [ec-common.c:907:ec_prepare_update_cbk] 0-tiervolume-disperse-1: Failed to get size and version [Input/output error]
[2015-12-24 10:58:58.973187] W [fuse-bridge.c:3355:fuse_xattr_cbk] 0-glusterfs-fuse: 19: GETXATTR(trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.5fc0f095-23c4-4b96-9d94-69decd14f1d4.stime) / => -1 (Input/output error)
[2015-12-24 10:58:58.989738] N [MSGID: 122031] [ec-generic.c:1133:ec_combine_xattrop] 0-tiervolume-disperse-1: Mismatching dictionary in answers of 'GF_FOP_XATTROP'
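The up/mask/good/bad fields in the EC messages above are hex bitmasks over the six bricks of the 4+2 subvolume; a small sketch decoding them (the bit-to-brick ordering is an assumption for illustration):

```python
def decode_brick_mask(mask: int, nbricks: int = 6):
    """Return the brick indices whose bit is set in an EC brick mask.

    Bit i is assumed to correspond to brick i of the disperse subvolume.
    """
    return [i for i in range(nbricks) if mask & (1 << i)]

# From the log line "up=3F, mask=3F, remaining=0, good=3E, bad=1":
print(decode_brick_mask(0x3F))  # all six bricks up
print(decode_brick_mask(0x3E))  # five bricks answered consistently
print(decode_brick_mask(0x01))  # one brick disagreed
```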


Following are the ec.version for disperse subvolume 1:
======================================================

[root@dhcp37-165 ~]# getfattr -d -e hex -m . /rhs/brick2/ct-7/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-7/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.version=0x00000000000000000000000000000011
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.5fc0f095-23c4-4b96-9d94-69decd14f1d4.stime=0x567bc34f00000000
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.xtime=0x567bc348000b7704
trusted.glusterfs.dht=0x00000001000000007ffa1668ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x1dd755248cc94d939b14518021c8df3f
trusted.tier.tier-dht=0x000000010000000000000000ba2cafe3
trusted.tier.tier-dht.commithash=0x3330313736373533323400

[root@dhcp37-165 ~]# 



[root@dhcp37-133 ~]# getfattr -d -e hex -m . /rhs/brick2/ct-8/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-8/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.dirty=0x0000000000000000000000000000000e
trusted.ec.version=0x00000000000000000000000000000020
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.5fc0f095-23c4-4b96-9d94-69decd14f1d4.stime=0x567bc34c00000000
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.xtime=0x567bc348000b781e
trusted.glusterfs.dht=0x00000001000000007ffa1668ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x1dd755248cc94d939b14518021c8df3f
trusted.tier.tier-dht=0x000000010000000000000000ffffffff
trusted.tier.tier-dht.commithash=0x3330313736383334313800

[root@dhcp37-133 ~]#


[root@dhcp37-160 ~]# getfattr -d -e hex -m . /rhs/brick2/ct-9/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-9/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.dirty=0x0000000000000000000000000000000e
trusted.ec.version=0x00000000000000000000000000000020
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.5fc0f095-23c4-4b96-9d94-69decd14f1d4.stime=0x567bc34b00000000
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.xtime=0x567bc348000b780d
trusted.glusterfs.dht=0x00000001000000007ffa1668ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x1dd755248cc94d939b14518021c8df3f
trusted.tier.tier-dht=0x000000010000000000000000ffffffff
trusted.tier.tier-dht.commithash=0x3330313736383334313800

[root@dhcp37-160 ~]# 



[root@dhcp37-158 ~]# getfattr -d -e hex -m . /rhs/brick2/ct-10/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-10/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.dirty=0x0000000000000000000000000000000e
trusted.ec.version=0x00000000000000000000000000000020
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.5fc0f095-23c4-4b96-9d94-69decd14f1d4.stime=0x567bc34f00000000
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.xtime=0x567bc348000b7c51
trusted.glusterfs.dht=0x00000001000000007ffa1668ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x1dd755248cc94d939b14518021c8df3f
trusted.tier.tier-dht=0x000000010000000000000000ffffffff
trusted.tier.tier-dht.commithash=0x3330313736383334313800

[root@dhcp37-158 ~]# 


[root@dhcp37-110 ~]# getfattr -d -e hex -m . /rhs/brick2/ct-11/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-11/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.dirty=0x0000000000000000000000000000000e
trusted.ec.version=0x00000000000000000000000000000020
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.5fc0f095-23c4-4b96-9d94-69decd14f1d4.stime=0x567bc34b00000000
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.xtime=0x567bc348000b7714
trusted.glusterfs.dht=0x00000001000000007ffa1668ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x1dd755248cc94d939b14518021c8df3f
trusted.tier.tier-dht=0x000000010000000000000000ffffffff
trusted.tier.tier-dht.commithash=0x3330313736383334313800

[root@dhcp37-110 ~]# 


[root@dhcp37-155 ~]# getfattr -d -e hex -m . /rhs/brick2/ct-12/
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-12/
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.dirty=0x0000000000000000000000000000000e
trusted.ec.version=0x00000000000000000000000000000020
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.5fc0f095-23c4-4b96-9d94-69decd14f1d4.stime=0x567bc34f00000000
trusted.glusterfs.1dd75524-8cc9-4d93-9b14-518021c8df3f.xtime=0x567bc348000b7989
trusted.glusterfs.dht=0x00000001000000007ffa1668ffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x1dd755248cc94d939b14518021c8df3f
trusted.tier.tier-dht=0x000000010000000000000000ffffffff
trusted.tier.tier-dht.commithash=0x3330313736383334313800

[root@dhcp37-155 ~]# 


[root@dhcp37-165 ~]# gluster volume info tiervolume 
 
Volume Name: tiervolume
Type: Distributed-Disperse
Volume ID: 1dd75524-8cc9-4d93-9b14-518021c8df3f
Status: Started
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.165:/rhs/brick1/ct-1
Brick2: 10.70.37.133:/rhs/brick1/ct-2
Brick3: 10.70.37.160:/rhs/brick1/ct-3
Brick4: 10.70.37.158:/rhs/brick1/ct-4
Brick5: 10.70.37.110:/rhs/brick1/ct-5
Brick6: 10.70.37.155:/rhs/brick1/ct-6
Brick7: 10.70.37.165:/rhs/brick2/ct-7
Brick8: 10.70.37.133:/rhs/brick2/ct-8
Brick9: 10.70.37.160:/rhs/brick2/ct-9
Brick10: 10.70.37.158:/rhs/brick2/ct-10
Brick11: 10.70.37.110:/rhs/brick2/ct-11
Brick12: 10.70.37.155:/rhs/brick2/ct-12
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
[root@dhcp37-165 ~]# 

<Note: the volume was a tiered volume when the original issue was seen. The output above is after tier detach.>



Version-Release number of selected component (if applicable):
=============================================================

glusterfs-libs-3.7.5-13.el7rhgs.x86_64

How reproducible:
=================

1/1


Steps Carried:
==============
1. Create Master volume Tiered {HT: 3x2, CT: 2x(4+2)} 
2. Create Slave volume (4x2)
3. Create geo-rep session
4. Start geo-rep session

Actual results:
===============

All the passive bricks went to faulty


Expected results:
=================

Geo-Rep should be ACTIVE

Comment 4 Ashish Pandey 2015-12-30 07:27:45 UTC
Tried to reproduce it on one system but could not.
Changed the xattrs using setfattr to the same values as provided in the BZ logs, but that did not help either.
I tried it on a single system where all the bricks and the master and slave volumes were created on one machine. Now investigating the sosreport and going through the code to find the RCA.

Comment 5 Rahul Hinduja 2016-01-04 06:54:31 UTC
Tried to reproduce this issue twice, and every time hit "OSError: [Errno 34] Numerical result out of range". That issue is tracked via bz 1285200; due to that bug the worker restarts automatically and becomes normal again.

Couldn't reproduce the issue mentioned in this bug. The systems are available with me if dev wants to change the xattrs using setfattr and try on multi-node systems.

Comment 6 Ashish Pandey 2016-01-04 09:47:59 UTC
I could reproduce this bug on the setup given by Rahul.

Set the stime xattr manually on all the bricks, each with a different value.
Set the ec.version xattr on 2 bricks of disperse-1 with different values.

getfattr on the master mount point then shows an IO error for the stime xattr:

 [root@dhcp37-133 glusterfs]# getfattr -d -m . -e hex /mnt/test/
getfattr: Removing leading '/' from absolute path names
# file: mnt/test/
security.selinux=0x73797374656d5f753a6f626a6563745f723a6675736566735f743a733000
/mnt/test/: trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.35ea5520-1e95-4155-b5f0-e713f0aa8049.stime: Input/output error
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.xtime=0x3000
trusted.tier.tier-dht.commithash=0x3330323438333832313200




====================================

[root@dhcp37-165 glusterfs]# getfattr -d -m. -e hex  /rhs/brick2/ct-7
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-7
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.version=0x00000000000000000000000000000000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.35ea5520-1e95-4155-b5f0-e713f0aa8049.stime=0x56896c8000000000
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.xtime=0x56896c94000235aa
trusted.glusterfs.dht=0x00000001000000007fffeb3fffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x9ef3d4da56154813b57f525c08f49773
trusted.tier.tier-dht=0x000000010000000000000000ba2da373
trusted.tier.tier-dht.commithash=0x3330323438333832313200



[root@dhcp37-133 glusterfs]# getfattr -d -m. -e hex  /rhs/brick2/ct-8
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-8
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.version=0x00000000000000000000000000000011
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.35ea5520-1e95-4155-b5f0-e713f0aa8049.stime=0x56896c9500000000
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.xtime=0x56896c93000b2e48
trusted.glusterfs.dht=0x00000001000000007fffeb3fffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x9ef3d4da56154813b57f525c08f49773
trusted.tier.tier-dht=0x000000010000000000000000ba2da373
trusted.tier.tier-dht.commithash=0x3330323438333832313200

[root@dhcp37-160 ~]# getfattr -d -m. -e hex  /rhs/brick2/ct-9
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-9
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.version=0x00000000000000000000000000000010
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.35ea5520-1e95-4155-b5f0-e713f0aa8049.stime=0x56896c9500000000
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.xtime=0x56896c92000e77b9
trusted.glusterfs.dht=0x00000001000000007fffeb3fffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x9ef3d4da56154813b57f525c08f49773
trusted.tier.tier-dht=0x000000010000000000000000ba2da373
trusted.tier.tier-dht.commithash=0x3330323438333832313200



[root@dhcp37-158 ~]# getfattr -d -m. -e hex  /rhs/brick2/ct-10
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-10
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.version=0x00000000000000000000000000000011
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.35ea5520-1e95-4155-b5f0-e713f0aa8049.stime=0x56896c9500000000
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.xtime=0x56896c9300092dc4
trusted.glusterfs.dht=0x00000001000000007fffeb3fffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x9ef3d4da56154813b57f525c08f49773
trusted.tier.tier-dht=0x000000010000000000000000ba2da373
trusted.tier.tier-dht.commithash=0x3330323438333832313200





[root@dhcp37-110 ~]# getfattr -d -e hex -m . /rhs/brick2/ct-11
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-11
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.version=0x00000000000000000000000000000011
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.35ea5520-1e95-4155-b5f0-e713f0aa8049.stime=0x56896c9900000000
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.xtime=0x56896c92000b0675
trusted.glusterfs.dht=0x00000001000000007fffeb3fffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x9ef3d4da56154813b57f525c08f49773
trusted.tier.tier-dht=0x000000010000000000000000ba2da373
trusted.tier.tier-dht.commithash=0x3330323438333832313200


[root@dhcp37-155 ~]# getfattr -d -m. -e hex  /rhs/brick2/ct-12
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick2/ct-12
security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
trusted.ec.version=0x00000000000000000000000000000011
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.35ea5520-1e95-4155-b5f0-e713f0aa8049.stime=0x56896c9800000000
trusted.glusterfs.9ef3d4da-5615-4813-b57f-525c08f49773.xtime=0x56896c92000d0c28
trusted.glusterfs.dht=0x00000001000000007fffeb3fffffffff
trusted.glusterfs.quota.dirty=0x3000
trusted.glusterfs.quota.size.1=0x000000000000000000000000000000000000000000000003
trusted.glusterfs.volume-id=0x9ef3d4da56154813b57f525c08f49773
trusted.tier.tier-dht=0x000000010000000000000000ba2da373
trusted.tier.tier-dht.commithash=0x3330323438333832313200

Comment 8 Nagaprasad Sathyanarayana 2016-01-05 13:31:21 UTC
Hi Rahul, Rajesh,

This seems to be an existing issue. Can you please try this in 3.1 or 3.0 and let us know if the issue is reproducible?

Thanks

Comment 9 Ashish Pandey 2016-01-07 11:18:58 UTC
To get stime, we fetch the stime xattr from all the bricks of the disperse subvolume and return the maximum value.

When the getxattr call for stime reaches ec_gf_getxattr, it is sent to EC_MINIMUM_ALL bricks, i.e. all the bricks of the disperse subvolume.

For all operations, ec_complete expects the number of healthy bricks to be greater than or equal to fop->minimum.

So, in a 4+2 configuration, if the "trusted.ec.version" xattr on 2 bricks differs from the remaining 4 bricks, getxattr on stime fails in ec_complete, as it would need 6 healthy bricks.

For stime, we only look at the value on each brick and return the maximum, so it is not required for stime to be the same on "ec->fragments" bricks.
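The failure mode can be sketched as follows (a hypothetical simplification, not the actual EC xlator code): the quorum-checked getxattr fails once fewer than fop->minimum bricks are healthy, even though stime only needs the maximum over whatever answers arrive:

```python
def getxattr_with_quorum(per_brick_stime, healthy, minimum):
    """Model of ec_complete: fail unless enough healthy bricks answered."""
    if len(healthy) < minimum:
        raise OSError(5, "Input/output error")  # EIO, as seen by gsyncd
    return max(per_brick_stime[b] for b in healthy)

def getxattr_stime_max(per_brick_stime):
    """What stime actually needs: the maximum over all answers."""
    return max(per_brick_stime.values())

# 4+2 subvolume: two bricks carry a divergent trusted.ec.version, so
# only 4 bricks count as healthy, while EC_MINIMUM_ALL requires all 6.
stime = {b: 1450951500 + b for b in range(6)}
healthy = [0, 2, 3, 4]  # bricks 1 and 5 excluded for mismatching version
try:
    getxattr_with_quorum(stime, healthy, minimum=6)
except OSError as e:
    print(e)  # the EIO that gsyncd turns into a faulty worker
print(getxattr_stime_max(stime))  # max-aggregation would still succeed
```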

Comment 10 Rahul Hinduja 2016-01-11 12:13:55 UTC
Tried to reproduce with build glusterfs-3.7.5-15.el7rhgs.x86_64 by killing 2 bricks in a subvolume at random. The geo-rep session never went faulty and the arequal checksum matches between master and slave.

But I see Input/Output errors being logged continuously in the master client logs:

[2016-01-10 23:19:40.384493] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-master-disperse-0: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=3C, bad=3)
[2016-01-10 23:19:40.388716] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-master-disperse-0: Mismatching xdata in answers of 'LOOKUP'
[2016-01-10 23:19:40.389042] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-master-disperse-0: Mismatching xdata in answers of 'LOOKUP'
[2016-01-10 23:19:40.389883] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-master-disperse-1: Mismatching xdata in answers of 'LOOKUP'
[2016-01-10 23:19:40.389924] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-master-disperse-1: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=3E, bad=1)
[2016-01-10 23:19:40.390256] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-master-disperse-0: Mismatching xdata in answers of 'LOOKUP'
[2016-01-10 23:19:40.390286] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-master-disperse-0: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=3C, bad=3)
[2016-01-10 23:19:40.400409] I [MSGID: 122058] [ec-heal.c:2340:ec_heal_do] 0-master-disperse-1: /etc.5/rc.d/rc1.d: name heal successful on 3F
[2016-01-10 23:19:40.402915] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-master-disperse-0: Mismatching xdata in answers of 'LOOKUP'
[2016-01-10 23:19:40.403059] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-master-disperse-0: Mismatching xdata in answers of 'LOOKUP'
[2016-01-10 23:19:40.403166] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-master-disperse-1: Mismatching xdata in answers of 'LOOKUP'
[2016-01-10 23:19:40.403385] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-master-disperse-0: Mismatching xdata in answers of 'LOOKUP'
[2016-01-10 23:19:40.402917] I [MSGID: 122058] [ec-heal.c:2340:ec_heal_do] 0-master-disperse-1: /etc.5/rc.d: name heal successful on 3F
[2016-01-10 23:19:40.404068] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-master-disperse-1: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=3E, bad=1)
[2016-01-10 23:19:40.404442] W [MSGID: 122056] [ec-combine.c:866:ec_combine_check] 0-master-disperse-0: Mismatching xdata in answers of 'LOOKUP'
[2016-01-10 23:19:40.404596] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-master-disperse-0: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=3C, bad=3)
[2016-01-10 23:19:40.425316] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-master-disperse-0: Heal failed [Input/output error]
[2016-01-10 23:19:40.425684] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-master-disperse-1: Heal failed [Input/output error]
[2016-01-10 23:19:40.431070] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-master-disperse-1: Heal failed [Input/output error]
[2016-01-10 23:19:40.438030] I [MSGID: 122058] [ec-heal.c:2340:ec_heal_do] 0-master-disperse-0: /etc.5/rc.d: name heal successful on 3F
[2016-01-10 23:19:40.438102] I [MSGID: 122058] [ec-heal.c:2340:ec_heal_do] 0-master-disperse-0: /etc.5/rc.d/rc1.d: name heal successful on 3F
[2016-01-10 23:19:40.455124] I [MSGID: 122058] [ec-heal.c:2340:ec_heal_do] 0-master-disperse-1: /etc.5/rc.d/init.d: name heal successful on 3F
[2016-01-10 23:19:40.466115] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-master-disperse-1: Heal failed [Input/output error]


At this point in time, all the subvolumes are up. 

Heal info also shows the same 639 entries for all bricks in the subvolume. Heal info split-brain only shows information about the hot tier.

[root@dhcp37-165 scripts]# gluster volume heal master info split-brain
Brick 10.70.37.158:/rhs/brick3/master_tier3
Number of entries in split-brain: 0

Brick 10.70.37.160:/rhs/brick3/master_tier2
Number of entries in split-brain: 0

Brick 10.70.37.133:/rhs/brick3/master_tier1
Number of entries in split-brain: 0

Brick 10.70.37.165:/rhs/brick3/master_tier0
Number of entries in split-brain: 0

[root@dhcp37-165 scripts]# 


Accessed files from master and they are accessible without any error. 

[root@dj slave]# ls /mnt/master/etc.5/selinux/targeted/modules/active/modules/inn.pp
/mnt/master/etc.5/selinux/targeted/modules/active/modules/inn.pp
[root@dj slave]# file /mnt/master/etc.5/selinux/targeted/modules/active/modules/inn.pp
/mnt/master/etc.5/selinux/targeted/modules/active/modules/inn.pp: bzip2 compressed data, block size = 900k
[root@dj slave]# ls /mnt/master/etc.5/openldap
certs  ldap.conf
[root@dj slave]# file /mnt/master/etc.5/openldap
/mnt/master/etc.5/openldap: directory
[root@dj slave]# ls /mnt/master/etc.5/rc.d/rc1.d/K90network
/mnt/master/etc.5/rc.d/rc1.d/K90network
[root@dj slave]# file /mnt/master/etc.5/rc.d/rc1.d/K90network
/mnt/master/etc.5/rc.d/rc1.d/K90network: symbolic link to `../init.d/network'
[root@dj slave]#

Comment 12 Ashish Pandey 2016-01-13 10:07:27 UTC
Heal Issue: 

[2016-01-10 23:19:40.466115] W [MSGID: 122002] [ec-common.c:71:ec_heal_report] 0-master-disperse-1: Heal failed [Input/output error]

During metadata heal, EC tries to find the source (healthy) bricks and the sink bricks. For an N+K configuration, at least N bricks MUST be healthy. To do this, it fetches the xdata of the bricks and matches them. Since xtime can differ across all N+K bricks, it does not find the minimum N bricks with the same metadata version and xtime. That is why it returns an IO error at this point and the metadata heal fails.

We could avoid returning xtime and stime to the user during getxattr, since these xattrs exist to implement geo-rep functionality and have nothing to do with the user.

Discussed this with the geo-rep team and am waiting for their input.
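The source-detection problem can be sketched like this (a hypothetical simplification, not the actual heal code): grouping bricks by their full xdata never yields a group of at least N bricks when xtime differs on every brick, even though the metadata versions alone would agree:

```python
from collections import Counter

def find_heal_sources(brick_xdata, n_fragments):
    """Group bricks by their xdata tuple; a group of at least N bricks
    is a valid source set, otherwise the heal fails with EIO."""
    groups = Counter(brick_xdata.values())
    best, count = groups.most_common(1)[0]
    if count < n_fragments:
        raise OSError(5, "Input/output error")
    return [b for b, x in brick_xdata.items() if x == best]

# 4+2: every brick has the same metadata version but a unique xtime,
# so no group reaches N=4 and the metadata heal reports EIO.
xdata = {b: ("version=0x11", f"xtime={0x567bc348 + b:#x}") for b in range(6)}
try:
    find_heal_sources(xdata, n_fragments=4)
except OSError as e:
    print(e)
```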

Comment 13 Rahul Hinduja 2016-02-11 07:32:45 UTC
Hitting this roughly 50% of the time with build glusterfs-cli-3.7.5-19.el7rhgs.x86_64 {Automation run}

[2016-02-11 07:19:25.382540] N [MSGID: 122031] [ec-generic.c:1133:ec_combine_xattrop] 0-master-disperse-1: Mismatching dictionary in answers of 'GF_FOP_XATTROP'
[2016-02-11 07:19:25.383961] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-master-disperse-1: Operation failed on some subvolumes (up=3F, mask=3B, remaining=0, good=3B, bad=4)
[2016-02-11 07:19:25.410654] N [MSGID: 122031] [ec-generic.c:1133:ec_combine_xattrop] 0-master-disperse-1: Mismatching dictionary in answers of 'GF_FOP_XATTROP'
[2016-02-11 07:19:25.411134] W [MSGID: 122040] [ec-common.c:907:ec_prepare_update_cbk] 0-master-disperse-1: Failed to get size and version [Input/output error]
[2016-02-11 07:19:25.413701] N [MSGID: 122031] [ec-generic.c:1133:ec_combine_xattrop] 0-master-disperse-0: Mismatching dictionary in answers of 'GF_FOP_XATTROP'
[2016-02-11 07:19:25.414238] N [MSGID: 122031] [ec-generic.c:1133:ec_combine_xattrop] 0-master-disperse-0: Mismatching dictionary in answers of 'GF_FOP_XATTROP'
[2016-02-11 07:19:25.414244] W [MSGID: 122040] [ec-common.c:907:ec_prepare_update_cbk] 0-master-disperse-0: Failed to get size and version [Input/output error]
[2016-02-11 07:19:25.414298] W [fuse-bridge.c:3360:fuse_xattr_cbk] 0-glusterfs-fuse: 7: GETXATTR(trusted.glusterfs.11046b3a-1714-420b-9e8e-09bf3bde48c5.892b253b-c7ec-41ca-affb-6e2e5f197816.stime) / => -1 (Input/output error)
[2016-02-11 07:19:25.425215] I [fuse-bridge.c:4965:fuse_thread_proc] 0-fuse: unmounting /tmp/gsyncd-aux-mount-ldsaqD
[2016-02-11 07:19:25.426707] W [glusterfsd.c:1236:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f2b3dac8dc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f2b3f133905] -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f2b3f133789] ) 0-: received signum (15), shutting down
[2016-02-11 07:19:25.426748] I [fuse-bridge.c:5669:fini] 0-fuse: Unmounting '/tmp/gsyncd-aux-mount-ldsaqD'.



[2016-02-11 07:10:18.82303] E [syncdutils(/bricks/brick0/master_brick0):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 165, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 662, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1439, in service_loop
    g3.crawlwrap(oneshot=True)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 586, in crawlwrap
    '.', '.'.join([str(self.uuid), str(gconf.slave_id)]))
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 323, in ff
    return f(*a)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 489, in stime_mnt
    8)
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 55, in lgetxattr
    return cls._query_xattr(path, siz, 'lgetxattr', attr)
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 47, in _query_xattr
    cls.raise_oserr()
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 37, in raise_oserr
    raise OSError(errn, os.strerror(errn))
OSError: [Errno 5] Input/output error
[2016-02-11 07:10:18.83926] I [syncdutils(/bricks/brick0/master_brick0):220:finalize] <top>: exiting.
[2016-02-11 07:10:18.92669] I [repce(agent):92:service_loop] RepceServer: terminating on reaching EOF.

Comment 14 Kotresh HR 2016-02-26 05:07:02 UTC
Upstream Patch Posted:
review.gluster.org/#/c/13242/

Comment 16 Aravinda VK 2016-03-23 06:21:12 UTC
Patch for this bug is available in rhgs-3.1.3 branch as part of rebase from upstream release-3.7.9.

Comment 18 Aravinda VK 2016-03-29 09:11:28 UTC
One more patch expected, not yet available in the build.
https://code.engineering.redhat.com/gerrit/#/c/70906

Comment 19 Rahul Hinduja 2016-05-31 12:02:15 UTC
Verified with build: glusterfs-3.7.9-6

1. Ran the geo-rep automation cases on EC and Tier volumes (cold tier being EC)
2. Manually changed the stime and version xattrs of the bricks and exercised both client-side healing and server-side healing.

In both cases, didn't observe the geo-rep worker crashing. Moving this bug to verified state.

Comment 23 errata-xmlrpc 2016-06-23 05:00:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

