Bug 1379935
| Summary: | SMB [md-cache Private Build]: Error messages in brick logs related to upcall_cache_invalidate gf_uuid_is_null |
| --- | --- |
| Product: | [Community] GlusterFS |
| Component: | upcall |
| Version: | 3.8 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Status: | CLOSED EOL |
| Severity: | medium |
| Priority: | medium |
| Reporter: | surabhi <sbhaloth> |
| Assignee: | Poornima G <pgurusid> |
| QA Contact: | |
| Docs Contact: | |
| CC: | amukherj, bugs, tdesala |
| Keywords: | Triaged |
| Target Milestone: | --- |
| Target Release: | --- |
| Whiteboard: | |
| Fixed In Version: | |
| Doc Type: | If docs needed, set a value |
| Doc Text: | |
| Story Points: | --- |
| Clone Of: | |
| Clones: | 1389422, 1392167 (view as bug list) |
| Environment: | |
| Last Closed: | 2017-11-07 10:36:36 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| CRM: | |
| Verified Versions: | |
| Category: | --- |
| oVirt Team: | --- |
| RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- |
| Target Upstream Version: | |
| Embargoed: | |
| Bug Depends On: | |
| Bug Blocks: | 1389422, 1392167, 1394186, 1394187, 1394188 |
Description (surabhi, 2016-09-28 07:34:44 UTC)
Same issue is seen with private glusterfs build: 3.8.4-2.26.git0a405a4.el7rhgs.x86_64

This issue is specific to the md-cache upcall. With the setup in the same state, disabling the md-cache options below (see the gluster v info output for details) made the ERROR messages disappear from the brick logs; enabling md-cache again started spamming the brick logs with the error messages.

Steps that were performed (a shell sketch of these steps follows the gluster v info output below):

1. Create a distributed-replicate volume and start it.
2. Enable the md-cache related options on the volume (see the gluster v info output below for the exact options).
3. Mount the volume on multiple clients. Simultaneously, from one client touch 10000 files and from another client create 10000 hard links for the same files.
4. Add a few bricks and start a rebalance.
5. Once the rebalance is completed, remove all the files on the mount point using "rm -rf".
6. Check the brick logs for the "Invalid argument" messages.

    [root@dhcp42-185 ~]# gluster v status
    Status of volume: distrep
    Gluster process                             TCP Port  RDMA Port  Online  Pid
    ------------------------------------------------------------------------------
    Brick 10.70.42.185:/bricks/brick0/b0        49152     0          Y       16587
    Brick 10.70.43.152:/bricks/brick0/b0        49152     0          Y       19074
    Brick 10.70.42.39:/bricks/brick0/b0         49152     0          Y       19263
    Brick 10.70.42.84:/bricks/brick0/b0         49152     0          Y       19630
    Brick 10.70.42.185:/bricks/brick1/b1        49153     0          Y       16607
    Brick 10.70.43.152:/bricks/brick1/b1        49153     0          Y       19094
    Brick 10.70.42.39:/bricks/brick1/b1         49153     0          Y       19283
    Brick 10.70.42.84:/bricks/brick1/b1         49153     0          Y       19650
    Brick 10.70.42.185:/bricks/brick2/b2        49154     0          Y       16627
    Brick 10.70.43.152:/bricks/brick2/b2        49154     0          Y       19114
    Brick 10.70.42.39:/bricks/brick2/b2         49154     0          Y       19303
    Brick 10.70.42.84:/bricks/brick2/b2         49154     0          Y       19670
    Brick 10.70.42.185:/bricks/brick3/b3        49155     0          Y       19472
    Brick 10.70.43.152:/bricks/brick3/b3        49155     0          Y       19380
    NFS Server on localhost                     N/A       N/A        N       N/A
    Self-heal Daemon on localhost               N/A       N/A        Y       19493
    NFS Server on 10.70.42.39                   N/A       N/A        N       N/A
    Self-heal Daemon on 10.70.42.39             N/A       N/A        Y       19588
    NFS Server on 10.70.42.84                   N/A       N/A        N       N/A
    Self-heal Daemon on 10.70.42.84             N/A       N/A        Y       19979
    NFS Server on 10.70.43.152                  N/A       N/A        N       N/A
    Self-heal Daemon on 10.70.43.152            N/A       N/A        Y       19401

    Task Status of Volume distrep
    ------------------------------------------------------------------------------
    Task       : Rebalance
    ID         : 19b1127e-246e-4afd-b59b-9690b9569122
    Status     : completed

    [root@dhcp42-185 ~]# gluster v info
    Volume Name: distrep
    Type: Distributed-Replicate
    Volume ID: 4ad479e4-fa01-4d91-8743-4e1510ba2c13
    Status: Started
    Snapshot Count: 0
    Number of Bricks: 7 x 2 = 14
    Transport-type: tcp
    Bricks:
    Brick1: 10.70.42.185:/bricks/brick0/b0
    Brick2: 10.70.43.152:/bricks/brick0/b0
    Brick3: 10.70.42.39:/bricks/brick0/b0
    Brick4: 10.70.42.84:/bricks/brick0/b0
    Brick5: 10.70.42.185:/bricks/brick1/b1
    Brick6: 10.70.43.152:/bricks/brick1/b1
    Brick7: 10.70.42.39:/bricks/brick1/b1
    Brick8: 10.70.42.84:/bricks/brick1/b1
    Brick9: 10.70.42.185:/bricks/brick2/b2
    Brick10: 10.70.43.152:/bricks/brick2/b2
    Brick11: 10.70.42.39:/bricks/brick2/b2
    Brick12: 10.70.42.84:/bricks/brick2/b2
    Brick13: 10.70.42.185:/bricks/brick3/b3
    Brick14: 10.70.43.152:/bricks/brick3/b3
    Options Reconfigured:
    performance.md-cache-timeout: 600
    performance.cache-invalidation: on
    performance.stat-prefetch: on
    features.cache-invalidation-timeout: 600
    features.cache-invalidation: on
    transport.address-family: inet
    performance.readdir-ahead: on
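The following is a rough shell sketch of the reproduction steps above, not taken from the bug: the hostnames (server1 to server4), brick paths, mount point, and the smaller 2 x 2 brick layout are placeholders; only the volume name and the option names/values from "Options Reconfigured" come from the output above.

```sh
# Reproduction sketch (placeholder hosts/paths; fewer bricks than the original
# 7 x 2 layout for brevity).

# 1. Create and start a distributed-replicate volume.
gluster volume create distrep replica 2 \
    server1:/bricks/brick0/b0 server2:/bricks/brick0/b0 \
    server3:/bricks/brick0/b0 server4:/bricks/brick0/b0
gluster volume start distrep

# 2. Enable the md-cache / cache-invalidation options listed under
#    "Options Reconfigured" above.
gluster volume set distrep features.cache-invalidation on
gluster volume set distrep features.cache-invalidation-timeout 600
gluster volume set distrep performance.stat-prefetch on
gluster volume set distrep performance.cache-invalidation on
gluster volume set distrep performance.md-cache-timeout 600

# 3. Mount on two clients and run both loops at the same time.
mkdir -p /mnt/distrep && mount -t glusterfs server1:/distrep /mnt/distrep
# client A: create 10000 files
for i in $(seq 1 10000); do touch /mnt/distrep/file.$i; done
# client B: hard-link the same files (some links may fail until the matching
# file exists, which mirrors the concurrent workload)
for i in $(seq 1 10000); do ln /mnt/distrep/file.$i /mnt/distrep/link.$i; done

# 4. Add bricks and rebalance.
gluster volume add-brick distrep server1:/bricks/brick3/b3 server2:/bricks/brick3/b3
gluster volume rebalance distrep start
gluster volume rebalance distrep status

# 5. After the rebalance completes, remove everything from the mount point.
rm -rf /mnt/distrep/*

# 6. Check the brick logs on the servers for the upcall errors.
grep -iE "upcall_cache_invalidate|invalid argument" /var/log/glusterfs/bricks/*.log
```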
(In reply to Prasad Desala from comment #1)
> Same issue is seen with private glusterfs build:
> 3.8.4-2.26.git0a405a4.el7rhgs.x86_64

Prasad - this is an upstream bug and no reference to a downstream build should be mentioned here. If required, please file a downstream bug to track the issue.

The fix for this would be to reduce the log level from Error to Debug; this bug is not introduced as part of the md-cache changes.

REVIEW: http://review.gluster.org/15777 (upcall: Fix a log level) posted (#1) for review on master by Poornima G (pgurusid)

This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.
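For context on the fix discussed above (http://review.gluster.org/15777, demoting the upcall_cache_invalidate message from Error to Debug): once the message is logged at Debug it no longer appears at the default brick log level, so a setup like the one in this bug would only see it after raising the brick log level explicitly. A minimal sketch, assuming the volume name distrep and the default brick log location; the grep pattern is paraphrased from the bug summary, not an exact log line:

```sh
# Count occurrences of the noisy message in the brick logs.
grep -icE "upcall_cache_invalidate|gf_uuid_is_null" /var/log/glusterfs/bricks/*.log

# With the message demoted to Debug, it is only emitted when brick logging is
# raised explicitly, e.g.:
gluster volume set distrep diagnostics.brick-log-level DEBUG

# Revert to the default brick log level afterwards.
gluster volume reset distrep diagnostics.brick-log-level
```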