Bug 1567129
Summary: | "remote operation failed" errors seen on fuse client | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
Component: | rpc | Assignee: | Raghavendra G <rgowdapp> |
Status: | CLOSED WONTFIX | QA Contact: | Sayalee <saraut> |
Severity: | low | Docs Contact: | |
Priority: | low | ||
Version: | rhgs-3.4 | CC: | amukherj, nchilaka, rhs-bugs, saraut, tdesala |
Target Milestone: | --- | Keywords: | ZStream |
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | If docs needed, set a value | |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2019-09-23 06:27:10 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Nag Pavan Chilakam
2018-04-13 13:25:45 UTC
The dht layout of the newly created directory folder1 is as follows:

dht-subvol1:

    # file: gluster/brick1/zen/folder1
    security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
    trusted.ec.version=0x00000000000000000000000000000001
    trusted.gfid=0xb71633380fc74dbfb8552708e1b6e40c
    trusted.glusterfs.dht=0x0000000000000000000000007ffffffe

dht-subvol2:

    # file: gluster/brick2/zen/folder1
    security.selinux=0x73797374656d5f753a6f626a6563745f723a676c7573746572645f627269636b5f743a733000
    trusted.ec.version=0x00000000000000000000000000000001
    trusted.gfid=0xb71633380fc74dbfb8552708e1b6e40c
    trusted.glusterfs.dht=0x00000000000000007fffffffffffffff
    trusted.glusterfs.dht.mds=0x00000000

Considering that the bug mentioned in comment #1 is now in a fixed state, should we retry the setup? We don't see any issue here.

Continuing comment #8: if this BZ is being planned for re-validation, please collect and attach logs to the BZ. If not, I'd like to see a reason for delegating this BZ to the RPC team/component. Since there are:

* no connection failures
* no RPC message drops
* no call bails
* no ping-timer expiry
* no re-connection attempts

reported, this does not seem like an RPC issue. Please clarify.

Not going to fix. The original BZ was fixed and released.
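As a sanity check of the layout shown above, the two `trusted.glusterfs.dht` values can be decoded by hand. This sketch assumes the common on-disk encoding of that xattr: four big-endian 32-bit integers (count, hash type, range start, range stop); the helper name `decode_dht_layout` is hypothetical, not a GlusterFS API.

```python
import struct

def decode_dht_layout(xattr_hex: str) -> dict:
    """Decode a trusted.glusterfs.dht xattr value.

    Assumed format: four big-endian 32-bit integers
    (count, hash type, range start, range stop).
    """
    raw = bytes.fromhex(xattr_hex.removeprefix("0x"))
    cnt, htype, start, stop = struct.unpack(">IIII", raw)
    return {"count": cnt, "type": htype, "start": start, "stop": stop}

# Values taken verbatim from the bug description.
sub1 = decode_dht_layout("0x0000000000000000000000007ffffffe")
sub2 = decode_dht_layout("0x00000000000000007fffffffffffffff")

print(hex(sub1["start"]), hex(sub1["stop"]))  # 0x0 0x7ffffffe
print(hex(sub2["start"]), hex(sub2["stop"]))  # 0x7fffffff 0xffffffff

# The two subvolumes tile the full 32-bit hash space with no gap
# or overlap, which is consistent with a healthy layout.
assert sub1["stop"] + 1 == sub2["start"]
assert sub2["stop"] == 0xFFFFFFFF
```

Under that assumed encoding, subvol1 owns hashes 0x0..0x7ffffffe and subvol2 owns 0x7fffffff..0xffffffff, supporting the comment that the layout itself shows no issue.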