Bug 1380710
Summary: | invalid argument warning messages seen in fuse client logs [2016-09-30 06:34:58.938667] W [dict.c:418:dict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x58722) 0-dict: !this || !value for key=link-count [Invalid argument] | |||
---|---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> | |
Component: | replicate | Assignee: | Pranith Kumar K <pkarampu> | |
Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> | |
Severity: | high | Docs Contact: | ||
Priority: | unspecified | |||
Version: | rhgs-3.2 | CC: | asrivast, nbalacha, rhinduja, rhs-bugs, storage-qa-internal, tdesala | |
Target Milestone: | --- | |||
Target Release: | RHGS 3.2.0 | |||
Hardware: | Unspecified | |||
OS: | Unspecified | |||
Whiteboard: | ||||
Fixed In Version: | glusterfs-3.8.4-3 | Doc Type: | If docs needed, set a value | |
Doc Text: | Story Points: | --- | ||
Clone Of: | ||||
: | 1385104 (view as bug list) | Environment: | ||
Last Closed: | 2017-03-23 06:07:09 UTC | Type: | Bug | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | Category: | --- | ||
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | ||||
Bug Blocks: | 1351528, 1385104, 1385236, 1385442 |
Description
Nag Pavan Chilakam
2016-09-30 11:58:08 UTC
Steps to reproduce this:
1. Create a 2x2 volume.
2. FUSE mount the volume and create dir1.
3. Unmount the volume.
4. Delete dir1 manually on both bricks of any one replica set.
5. Mount the volume and do a lookup. DHT should see that the directory is missing and trigger a heal, causing this message to be logged.

Glusterfs version: 3.8.4-2.el7rhgs.x86_64

Similar warning messages are also seen in the rebalance logs during rebalance:

[2016-10-06 10:09:11.181450] W [dict.c:418:dict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x4b320) [0x7efdb3b7d320] -->/lib64/libglusterfs.so.0(dict_set_str+0x2c) [0x7efdc5bce32c] -->/lib64/libglusterfs.so.0(dict_set+0xe6) [0x7efdc5bcc1e6] ) 0-dict: !this || !value for key=link-count [Invalid argument]
[2016-10-06 10:09:11.184983] I [dht-rebalance.c:2902:gf_defrag_process_dir] 0-distrep-dht: Migration operation on dir /manual/sticky/d3263 took 0.08 secs
[2016-10-06 10:09:11.191802] W [dict.c:418:dict_set] (-->/usr/lib64/glusterfs/3.8.4/xlator/cluster/replicate.so(+0x4b320) [0x7efdb3b7d320] -->/lib64/libglusterfs.so.0(dict_set_str+0x2c) [0x7efdc5bce32c] -->/lib64/libglusterfs.so.0(dict_set+0xe6) [0x7efdc5bcc1e6] ) 0-dict: !this || !value for key=link-count [Invalid argument]

Updated this BZ because the warning messages observed in the fuse client and rebalance logs look similar. If they are not the same issue, please let me know and I will open a new BZ for the warnings seen in the rebalance logs.

Steps that were performed:
==========================
1) Create a distributed-replicate volume and start it.
2) FUSE mount the volume and create files and directories.
3) Add a few bricks to the volume.
4) Trigger rebalance.
5) Monitor the rebalance log, /var/log/glusterfs/<volname>-rebalance.log, for the above warning messages.

These are two separate test cases that trigger the same condition: healing of directories that are missing on some bricks. QE needs to decide whether the same BZ can be used to verify both scenarios.
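The reproduction in steps 1-5 above can be sketched as shell commands. This is a minimal sketch, not a verbatim transcript from the reporter's setup: the hostnames (server1/server2), brick paths (/bricks/brick{1..4}), volume name (testvol), and mount point (/mnt/testvol) are all hypothetical placeholders.

```shell
# Hypothetical 2x2 (distributed-replicate) volume; adjust hosts/paths to your setup.
gluster volume create testvol replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick2 \
    server1:/bricks/brick3 server2:/bricks/brick4
gluster volume start testvol

# FUSE mount and create dir1, then unmount.
mount -t glusterfs server1:/testvol /mnt/testvol
mkdir /mnt/testvol/dir1
umount /mnt/testvol

# Delete dir1 directly on both bricks of one replica set (on the brick nodes).
rm -rf /bricks/brick1/dir1   # on server1
rm -rf /bricks/brick2/dir1   # on server2

# Remount and trigger a lookup; DHT notices the missing directory and heals it,
# which is what logs the "key=link-count [Invalid argument]" warning.
mount -t glusterfs server1:/testvol /mnt/testvol
ls /mnt/testvol/dir1
```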
QATP:
=====
Re-ran the cases with the fixed-in build and did not see the warnings in any of the cases below; hence moving to verified.

TC#1:
====
1. Create the same directory structure from two different clients.
Result: the warning is not seen.

TC#2:
====
1) Create a distributed-replicate volume and start it.
2) FUSE mount the volume and create files and directories.
3) Add a few bricks to the volume.
4) Trigger rebalance.
5) Monitor the rebalance log, /var/log/glusterfs/<volname>-rebalance.log, for the warning messages.
Result: the warnings are no longer seen.

TC#3:
====
1. Create a 2x2 volume.
2. FUSE mount the volume and create dir1.
3. Unmount the volume.
4. Delete dir1 manually on both bricks of any one replica set.
5. Mount the volume and do a lookup. DHT should see that the directory is missing and trigger a heal, causing this message to be logged.
Result: the warnings are no longer seen.

Hence moving to verified.

[root@dhcp35-86 glusterfs]# rpm -qa|grep gluster
glusterfs-3.8.4-3.el7rhgs.x86_64
glusterfs-server-3.8.4-3.el7rhgs.x86_64
glusterfs-fuse-3.8.4-3.el7rhgs.x86_64
glusterfs-libs-3.8.4-3.el7rhgs.x86_64
glusterfs-api-3.8.4-3.el7rhgs.x86_64
glusterfs-cli-3.8.4-3.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-3.el7rhgs.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
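The log check in step 5 of TC#2 amounts to counting occurrences of the warning signature in the rebalance log. A minimal sketch, using a hypothetical sample log at /tmp/demo-rebalance.log (on a real node the file is /var/log/glusterfs/<volname>-rebalance.log):

```shell
# Hypothetical sample log standing in for the real rebalance log.
LOG=/tmp/demo-rebalance.log
cat > "$LOG" <<'EOF'
[2016-10-06 10:09:11.181450] W [dict.c:418:dict_set] 0-dict: !this || !value for key=link-count [Invalid argument]
[2016-10-06 10:09:11.184983] I [dht-rebalance.c:2902:gf_defrag_process_dir] 0-distrep-dht: Migration operation on dir /manual/sticky/d3263 took 0.08 secs
EOF

# Count the warning occurrences; on a fixed build this should print 0.
grep -c 'key=link-count.*Invalid argument' "$LOG"    # prints 1 for this sample
```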