Bug 1385605
| Summary: | fuse mount point not accessible |
|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage |
| Component: | rpc |
| Version: | rhgs-3.2 |
| Hardware: | All |
| OS: | Linux |
| Status: | CLOSED ERRATA |
| Severity: | urgent |
| Priority: | unspecified |
| Target Milestone: | --- |
| Target Release: | RHGS 3.2.0 |
| Fixed In Version: | glusterfs-3.8.4-7 |
| Doc Type: | If docs needed, set a value |
| Reporter: | Karan Sandha <ksandha> |
| Assignee: | Raghavendra Talur <rtalur> |
| QA Contact: | Karan Sandha <ksandha> |
| CC: | aloganat, amukherj, asrivast, bkunal, ccalhoun, hamiller, ksandha, nchilaka, olim, omasek, pgurusid, pkarampu, rabhat, rcyriac, rgowdapp, rhinduja, rhs-bugs, rjoseph, rnalakka, rtalur, sanandpa, storage-qa-internal |
| Clones: | 1386626 (view as bug list) |
| Last Closed: | 2017-03-23 06:11:05 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| Category: | --- |
| oVirt Team: | --- |
| Cloudforms Team: | --- |
| Story Points: | --- |
| Bug Blocks: | 1351528, 1386626, 1388323, 1392906, 1397267, 1398930, 1401534, 1408949, 1474007 |
Description: Karan Sandha, 2016-10-17 12:06:52 UTC
Karan, can you attach brick and client log files?

regards,
Raghavendra

Also, has this test case been tried on a 3.2 build without the md-cache options?

Poornima, yes, I tried a build without md-cache but wasn't able to hit it.

Thanks & regards,
Karan Sandha

*** Bug 1388414 has been marked as a duplicate of this bug. ***

I hit this case in my systemic testing, where one brick of a replica pair is down. However, the client sees both bricks as down, in spite of one being up. Hence, if we try to cat a file sitting on that brick we get a transport-endpoint-not-connected error, and if we try to write to a file on that brick we get EIO.

Version: 3.8.4-5

The sosreport of the client is available at:

```
[qe@rhsqe-repo nchilaka]$ pwd
/var/www/html/sosreports/nchilaka
[qe@rhsqe-repo nchilaka]$
/var/www/html/sosreports/nchilaka/bug.1385605
```

```
[root@dhcp35-191 ~]# gluster v info gl
Volume Name: sysvol
Type: Distributed-Replicate
Volume ID: b1ef4d84-0614-4d5d-9e2e-b19183996e43
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: 10.70.35.191:/rhs/brick1/sysvol
Brick2: 10.70.37.108:/rhs/brick1/sysvol
Brick3: 10.70.35.3:/rhs/brick1/sysvol
Brick4: 10.70.37.66:/rhs/brick1/sysvol
Brick5: 10.70.35.191:/rhs/brick2/sysvol
Brick6: 10.70.37.108:/rhs/brick2/sysvol
Brick7: 10.70.35.3:/rhs/brick2/sysvol
Brick8: 10.70.37.66:/rhs/brick2/sysvol
Options Reconfigured:
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.stat-prefetch: on
performance.cache-invalidation: on
cluster.shd-max-threads: 10
features.cache-invalidation-timeout: 400
features.cache-invalidation: on
performance.md-cache-timeout: 300
features.uss: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
```

```
[root@dhcp35-191 ~]# gluster v status
Status of volume: sysvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.35.191:/rhs/brick1/sysvol       N/A       N/A        N       N/A
Brick 10.70.37.108:/rhs/brick1/sysvol       49152     0          Y       27848
Brick 10.70.35.3:/rhs/brick1/sysvol         N/A       N/A        N       N/A
Brick 10.70.37.66:/rhs/brick1/sysvol        49152     0          Y       28853
Brick 10.70.35.191:/rhs/brick2/sysvol       49153     0          Y       18344
Brick 10.70.37.108:/rhs/brick2/sysvol       N/A       N/A        N       N/A
Brick 10.70.35.3:/rhs/brick2/sysvol         49153     0          Y       11727
Brick 10.70.37.66:/rhs/brick2/sysvol        N/A       N/A        N       N/A
Snapshot Daemon on localhost                49154     0          Y       18461
Self-heal Daemon on localhost               N/A       N/A        Y       18364
Quota Daemon on localhost                   N/A       N/A        Y       18410
Snapshot Daemon on 10.70.35.3               49154     0          Y       11826
Self-heal Daemon on 10.70.35.3              N/A       N/A        Y       11747
Quota Daemon on 10.70.35.3                  N/A       N/A        Y       11779
Snapshot Daemon on 10.70.37.66              49154     0          Y       28970
Self-heal Daemon on 10.70.37.66             N/A       N/A        Y       28892
Quota Daemon on 10.70.37.66                 N/A       N/A        Y       28923
Snapshot Daemon on 10.70.37.108             49154     0          Y       27965
Self-heal Daemon on 10.70.37.108            N/A       N/A        Y       27887
Quota Daemon on 10.70.37.108                N/A       N/A        Y       27918

Task Status of Volume sysvol
------------------------------------------------------------------------------
There are no active volume tasks
[root@dhcp35-191 ~]#
```

*** Bug 1392906 has been marked as a duplicate of this bug. ***

Patch posted upstream at http://review.gluster.org/#/c/15916

Upstream master: http://review.gluster.org/15916
Upstream release-3.8: http://review.gluster.org/16025
Upstream release-3.9: http://review.gluster.org/16026
Downstream: https://code.engineering.redhat.com/gerrit/92095

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

Thanks Nag for the update.

@Rejy: do we need the hotfix flag set on this bug?

*** Bug 1429145 has been marked as a duplicate of this bug. ***
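The mismatch reported above (server-side `gluster v status` shows one brick of each replica pair online, while the FUSE client treats both as down) can be cross-checked from the client side by scanning the mount's client log for connect/disconnect events. Below is a minimal sketch; the log path and message wording are illustrative assumptions, not exact glusterfs output, and the real format varies between releases.

```shell
# Sketch: count connect/disconnect events a FUSE client has logged.
# NOTE: the log excerpt below is a hypothetical sample written for
# demonstration; real client logs live under /var/log/glusterfs/.
log=/tmp/sysvol-client.log

cat > "$log" <<'EOF'
[2016-10-17 12:00:01] I: 0-sysvol-client-0: Connected to sysvol-client-0
[2016-10-17 12:00:01] I: 0-sysvol-client-1: Connected to sysvol-client-1
[2016-10-17 12:05:09] W: 0-sysvol-client-0: disconnected from sysvol-client-0
EOF

# Count how many connect and disconnect events the client recorded.
connected=$(grep -c 'Connected to' "$log")
disconnected=$(grep -c 'disconnected from' "$log")
echo "connect events: $connected, disconnect events: $disconnected"
```

Comparing the last connect/disconnect event per `client-N` subvolume in the client log against the server-side `Online` column is one way to confirm that the client's view of brick availability has diverged from the servers', which is the symptom this bug describes.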