Bug 1784211
| Field | Value | Field | Value |
|---|---|---|---|
| Summary | [GSS] - 'gluster volume set <VOLUME> disable.nfs' accidentally killed unexpected process, and forced a data brick offline. | | |
| Product | [Red Hat Storage] Red Hat Gluster Storage | Reporter | Kenichiro Kagoshima <kkagoshi> |
| Component | glusterd | Assignee | Srijan Sivakumar <ssivakum> |
| Status | CLOSED ERRATA | QA Contact | milind <mwaykole> |
| Severity | medium | Docs Contact | |
| Priority | unspecified | | |
| Version | rhgs-3.4 | CC | pasik, pprakash, puebele, rhs-bugs, rkothiya, sheggodu, storage-qa-internal |
| Target Milestone | --- | Keywords | ZStream |
| Target Release | RHGS 3.5.z Batch Update 3 | | |
| Hardware | x86_64 | | |
| OS | Linux | | |
| Whiteboard | | | |
| Fixed In Version | glusterfs-6.0-38 | Doc Type | No Doc Update |
| Doc Text | | Story Points | --- |
| Clone Of | | | |
| Clones | 1784375 1849533 (view as bug list) | Environment | |
| Last Closed | 2020-12-17 04:50:48 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
| Bug Depends On | | | |
| Bug Blocks | 1784375, 1849533 | | |
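As context for the Summary above, here is a minimal sketch of the reported scenario. It is a sketch only, not a verbatim reproduction of the reporter's commands: the volume name `vol` and the brick-multiplexed replica-3 setup are taken from the verification output further down in this bug, and the value passed to the mistyped `disable.nfs` key is an assumption.

```bash
# Sketch of the scenario named in the Summary (assumptions noted above).
# On an affected build, a set request using the reversed option name
# "disable.nfs" (instead of "nfs.disable") could kill an unexpected
# process and force a data brick offline; on glusterfs-6.0-38 or later
# (the "Fixed In Version"), the bricks are expected to stay online.

# Record brick PIDs and online state before the mistyped command.
gluster volume status vol | grep '^Brick'

# Mistyped option name; the value "on" here is illustrative only.
gluster volume set vol disable.nfs on

# Re-check: every brick should still show Online=Y with the same PID.
gluster volume status vol | grep '^Brick'
```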
Description (Kenichiro Kagoshima, 2019-12-16 23:53:07 UTC)

Comments
Thanks for the detailed bug report. I will read the code and send out a fix if there is a bug.

upstream patch: https://review.gluster.org/#/c/glusterfs/+/23890

(In reply to Sanju from comment #2)
> upstream patch: https://review.gluster.org/#/c/glusterfs/+/23890

This is merged. What's the next step here?

```
[node.example.com]# gluster v info

Volume Name: vol
Type: Replicate
Volume ID: 75fea424-ac46-42a9-ba76-acd844742b8b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.16.159.52:/bricks/brick0/vol
Brick2: node3:/bricks/brick0/vol
Brick3: node4:/bricks/brick0/vol
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.brick-multiplex: on

[node.example.com]# gluster v status
Status of volume: vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.16.159.52:/bricks/brick0/vol       49152     0          Y       163653
Brick node3:/bricks/brick0/vol              49152     0          Y       61197
Brick node4:/bricks/brick0/vol              49152     0          Y       143348
Self-heal Daemon on localhost               N/A       N/A        Y       163670
Self-heal Daemon on node1                   N/A       N/A        Y       44985
Self-heal Daemon on node3                   N/A       N/A        Y       61214
Self-heal Daemon on node2                   N/A       N/A        Y       88757
Self-heal Daemon on 10.16.159.124           N/A       N/A        Y       90124
Self-heal Daemon on node4                   N/A       N/A        Y       143365

Task Status of Volume vol
------------------------------------------------------------------------------
There are no active volume tasks

[node.example.com]# gluster volume set vol nfs.disable off
volume set: success

[node.example.com]# gluster v info

Volume Name: vol
Type: Replicate
Volume ID: 75fea424-ac46-42a9-ba76-acd844742b8b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.16.159.52:/bricks/brick0/vol
Brick2: node3:/bricks/brick0/vol
Brick3: node4:/bricks/brick0/vol
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: off
performance.client-io-threads: off
cluster.brick-multiplex: on

[node.example.com]# gluster v status
Status of volume: vol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.16.159.52:/bricks/brick0/vol       49152     0          Y       163653
Brick node3:/bricks/brick0/vol              49152     0          Y       61197
Brick node4:/bricks/brick0/vol              49152     0          Y       143348
NFS Server on localhost                     2049      0          Y       163723
Self-heal Daemon on localhost               N/A       N/A        Y       163670
NFS Server on 10.16.159.124                 2049      0          Y       90152
Self-heal Daemon on 10.16.159.124           N/A       N/A        Y       90124
NFS Server on node1                         2049      0          Y       45012
Self-heal Daemon on node1                   N/A       N/A        Y       44985
NFS Server on node2                         2049      0          Y       88784
Self-heal Daemon on node2                   N/A       N/A        Y       88757
NFS Server on node3                         2049      0          Y       61245
Self-heal Daemon on node3                   N/A       N/A        Y       61214
NFS Server on node4                         2049      0          Y       143396
Self-heal Daemon on node4                   N/A       N/A        Y       143365

Task Status of Volume vol
------------------------------------------------------------------------------
There are no active volume tasks
```

Additional info:

```
glusterfs-6.0-45.el8rhgs.x86_64
glusterfs-fuse-6.0-45.el8rhgs.x86_64
glusterfs-api-6.0-45.el8rhgs.x86_64
glusterfs-selinux-1.0-1.el8rhgs.noarch
glusterfs-client-xlators-6.0-45.el8rhgs.x86_64
glusterfs-server-6.0-45.el8rhgs.x86_64
glusterfs-cli-6.0-45.el8rhgs.x86_64
glusterfs-libs-6.0-45.el8rhgs.x86_64
```

After performing the steps, no brick goes offline once nfs.disable is set to off, hence marking this bug as verified.
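The manual check above can also be expressed as a small script. This is only a sketch of the same verification, not part of any official test suite: it assumes a started volume named `vol` and relies on the `gluster volume status` row layout shown above, where brick rows start with "Brick" and the PID is the last column.

```bash
#!/bin/bash
# Sketch of the verification performed above: confirm that toggling
# nfs.disable leaves every brick process online with an unchanged PID.
# Assumes a started volume named "vol".

VOL=vol

# Brick PIDs before changing the option (last column of the Brick rows).
before=$(gluster volume status "$VOL" | awk '/^Brick/ {print $NF}')

gluster volume set "$VOL" nfs.disable off
sleep 5   # give glusterd a moment to apply the change

# Brick PIDs afterwards.
after=$(gluster volume status "$VOL" | awk '/^Brick/ {print $NF}')

if [ "$before" = "$after" ]; then
    echo "PASS: brick PIDs unchanged:"
    echo "$before"
else
    echo "FAIL: brick PIDs changed"
    echo "before: $before"
    echo "after:  $after"
    exit 1
fi
```

In the verification output above, the brick PIDs (163653, 61197, 143348) are identical before and after the option change, which is the condition this check encodes.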
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603