Bug 1588408
Summary: | Fops are sent to glusterd and uninitialized brick stack when client reconnects to brick | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Raghavendra G <rgowdapp>
Component: | protocol | Assignee: | Raghavendra G <rgowdapp>
Status: | CLOSED ERRATA | QA Contact: | Rajesh Madaka <rmadaka>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | rhgs-3.4 | CC: | amukherj, rgowdapp, rhs-bugs, rkavunga, rmadaka, sankarshan, storage-qa-internal, vdas
Target Milestone: | --- | |
Target Release: | RHGS 3.4.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | glusterfs-3.12.2-13 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2018-09-04 06:49:14 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1503137 | |
Description
Raghavendra G
2018-06-07 09:07:56 UTC
(In reply to Raghavendra G from comment #0)

> 2. Fops can be sent to the brick when the brick stack is not initialized, causing crashes like bz 1503137, bz 1520374, and bz 1583937.

As suggested by dev, I have followed the steps from bz 1583937. After upgrading from RHGS-3.3.1 (RHEL-7.4) to RHGS-3.4 (RHEL-7.5), the upgraded node's bricks went offline for most of the volumes.

sosreport copied to the location below:
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/rajesh/1588408/

(In reply to Rajesh Madaka from comment #8)

> As suggested by dev, I have followed the steps from bz 1583937.

Did the bricks crash? Are cores copied in the sosreport?

No cores were generated. I don't think it is a brick crash; the bricks simply did not come online.

(In reply to Rajesh Madaka from comment #10)

> No cores were generated. I don't think it is a brick crash; the bricks simply did not come online.

Can you explain what you mean by the bricks not coming online? How were you observing the bricks - through gluster volume status, through a client connecting to the brick, by not seeing the brick process, etc.?

I am observing brick status through gluster volume status. Most of the upgraded node's bricks show status N/A.

Based on the discussion with QE, moving this BZ to ON_QA again.

Just to clarify why this BZ has been moved back to ON_QA: the bricks not coming up is entirely unrelated to the fix this bug brings in.

Can you please provide steps to verify this bug?

(In reply to Rajesh Madaka from comment #15)

> Can you please provide steps to verify this bug?

I think that if you don't see a brick crash, the bug can be marked as verified. As we discussed on chat, clients are able to connect to bricks and the mount is successful. Bricks not being shown online in gluster v status might be a different bug.

I have verified this bug with the two scenarios below.

First scenario: I followed the steps mentioned in bz #1583937. I did not find any brick crashes or mount point disconnections, but the bricks went offline; I will raise a separate bug for that. Gluster build version: glusterfs-fuse-3.12.2-16.

Second scenario:

-> Created a 3-node cluster
-> Created a volume
-> Mounted the volume on a client
-> Rebooted one of the gluster nodes

I did not find any brick crashes or mount disconnections. Moving this bug to the verified state. Gluster build version: glusterfs-fuse-3.12.2-17.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
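The second verification scenario (cluster, volume, mount, node reboot) could be reproduced with a gluster CLI session along the following lines. This is a sketch, not the exact commands QE ran: the hostnames (server1-3), volume name (testvol), brick paths, and mount point are all illustrative placeholders, and the commands require a live gluster cluster to execute.

```
# On server1: form a 3-node trusted storage pool (hostnames are placeholders)
gluster peer probe server2
gluster peer probe server3

# Create and start a volume with one brick per node (name and paths illustrative)
gluster volume create testvol server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
gluster volume start testvol

# On the client: mount the volume over the FUSE client
mount -t glusterfs server1:/testvol /mnt/testvol

# Reboot one gluster node, then check from a surviving node that no brick
# crashed and that the client mount stayed connected
gluster volume status testvol
```

After the rebooted node returns, the reconnect path exercised by this bug fires when the client re-establishes its connection to the brick; the pass criterion used above was no brick crash and no mount disconnection.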