Bug 1624698
| Field | Value |
|---|---|
| Summary | [Tracking BZ#1632719] With only 1 node down, multipath -ll shows multiple paths in "failed" state |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Reporter | Neha Berry <nberry> |
| Component | gluster-block |
| Assignee | Prasanna Kumar Kalever <prasanna.kalever> |
| Status | CLOSED ERRATA |
| QA Contact | Neha Berry <nberry> |
| Severity | high |
| Docs Contact | |
| Priority | medium |
| Version | cns-3.10 |
| CC | akrishna, amukherj, atumball, bgoyal, hchiramm, jahernan, kramdoss, madam, nberry, pkarampu, pprakash, prasanna.kalever, rcyriac, rhs-bugs, rtalur, sankarshan, vbellur, xiubli |
| Target Milestone | --- |
| Keywords | ZStream |
| Target Release | OCS 3.11.1 |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | |
| Fixed In Version | glusterfs-3.12.2-20 |
| Doc Type | Bug Fix |
| Doc Text | Previously, if any brick of the block hosting volume went down after the block volume was mounted, all multipath paths to the block volume entered the failed state. With a single brick down, the backing glusterfs block hosting volume (BHV) took far too long (~14 minutes) to respond to I/O requests, while the expected response time is 42 seconds, so applications using the block volume encountered input/output errors. With this fix, the glusterfs block hosting volume's server.tcp-user-timeout option is set to 42 seconds by default (an illustrative example of applying the option manually follows this table). |
| Story Points | --- |
| Clone Of | |
| Clones | 1632719 (view as bug list) |
| Environment | |
| Flags | devel_ack? → devel_ack+ |
| Last Closed | 2019-02-07 03:38:29 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | --- |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | 1623874 |
| Bug Blocks | 1641915, 1644154 |
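As a reference for the fix described in the Doc Text above, here is a minimal sketch of checking and applying the timeout by hand with the gluster CLI on an existing block hosting volume. The volume name vol_blockhosting is a placeholder; on fixed builds (glusterfs-3.12.2-20 and later) gluster-block is expected to set the option by default for newly created block hosting volumes, so the manual step would only matter for volumes created before the fix.

```sh
# Placeholder block hosting volume name; substitute your own BHV.
BHV=vol_blockhosting

# Inspect the current value of the option on the block hosting volume.
gluster volume get "$BHV" server.tcp-user-timeout

# Apply the 42-second TCP user timeout that the fix configures by default,
# so I/O to the block volume fails over quickly when a brick goes down.
gluster volume set "$BHV" server.tcp-user-timeout 42
```

If this matches the shipped behaviour, the paths reported by multipath -ll should no longer stay in the failed state for ~14 minutes when a single brick goes down.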
Description: Neha Berry, 2018-09-03 06:21:06 UTC
> Prasanna, let's give devel ack for this bug, considering we will have the RHGS fix available with the OCS 3.11 release. If it doesn't make it, or the release timeline does not match, we will take back the acks.

Humble, at this point RHGS would include this fix, but the release date is the concern. Can't we handle it at a higher level until we have the RHGS version? That way, we will have a smooth dependency chain in the release.
Have updated the doc text. Kindly verify the doc text for accuracy.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0285