Bug 1683602
| Summary: | [Brick-mux] Observing multiple brick processes on node reboot with volume start | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Kshithij Iyer <kiyer> |
| Component: | glusterd | Assignee: | Srijan Sivakumar <ssivakum> |
| Status: | CLOSED ERRATA | QA Contact: | milind <mwaykole> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | rhgs-3.4 | CC: | budic, kiyer, moagrawa, nchilaka, pasik, pprakash, puebele, rhs-bugs, rkothiya, sheggodu, storage-qa-internal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.5.z Batch Update 3 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-6.0-38 | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-12-17 04:50:16 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Kshithij Iyer
2019-02-27 10:22:37 UTC
I've encountered this on gluster 5.5 while upgrading servers. Multiple glusterfsd processes per volume spawned, disrupting healing and causing significant problems for the VMs using these volumes. I realized I'm probably seeing something different, albeit maybe with the same root cause, so I opened https://bugzilla.redhat.com/show_bug.cgi?id=1698131. I have multiplexing disabled on my systems.

Upstream patch: https://review.gluster.org/22635

(In reply to Atin Mukherjee from comment #10)
> Upstream patch : https://review.gluster.org/22635

The patch is in the abandoned state, so moving this bug back to the assigned state.

https://review.gluster.org/#/c/glusterfs/+/23724/ fixes the issue and the patch is already merged upstream.

Steps (a shell sketch of these steps follows at the end of this report):
1. Create a 3-node cluster.
2. Set cluster.brick-multiplex to enable.
3. Create 15 volumes of type replica 1x3.
4. Start all the volumes one by one.
5. While the volumes are starting, reboot one node.

----------------------------------
brick-mux is enabled

[root@rhel7-node1 ~]# pidof glusterfsd
4101
[root@rhel7-node2 ~]# pidof glusterfsd
2191
[root@rhel7-node3 ~]# pidof glusterfsd
28544

[node1-rhel7 ~]# gluster v get all all
Option                                  Value
------                                  -----
cluster.server-quorum-ratio             51
cluster.enable-shared-storage           disable
cluster.op-version                      70000
cluster.max-op-version                  70000
cluster.brick-multiplex                 on
cluster.max-bricks-per-process          250
glusterd.vol_count_per_thread           100
cluster.daemon-log-level                INFO

[root@rhel7-node1 ~]# rpm -qa | grep -i glusterfs
glusterfs-client-xlators-6.0-45.el7rhgs.x86_64
glusterfs-libs-6.0-45.el7rhgs.x86_64
glusterfs-events-6.0-45.el7rhgs.x86_64
glusterfs-6.0-45.el7rhgs.x86_64
glusterfs-cli-6.0-45.el7rhgs.x86_64
glusterfs-rdma-6.0-45.el7rhgs.x86_64
glusterfs-server-6.0-45.el7rhgs.x86_64
glusterfs-fuse-6.0-45.el7rhgs.x86_64
glusterfs-api-6.0-45.el7rhgs.x86_64

============ RHEL 8 ============

[root@rhel8-node1 ~]# pgrep glusterfsd
72721
[root@rhel8-node2 ~]# pgrep glusterfsd
1822
[root@rhel8-node3 ~]# pgrep glusterfsd
70504

[root@rhel8-node1 ~]# rpm -qa | grep -i glusterfs
glusterfs-6.0-45.el8rhgs.x86_64
glusterfs-fuse-6.0-45.el8rhgs.x86_64
glusterfs-api-6.0-45.el8rhgs.x86_64
glusterfs-selinux-1.0-1.el8rhgs.noarch
glusterfs-client-xlators-6.0-45.el8rhgs.x86_64
glusterfs-server-6.0-45.el8rhgs.x86_64
glusterfs-cli-6.0-45.el8rhgs.x86_64
glusterfs-libs-6.0-45.el8rhgs.x86_64

[root@rhel8-node1 ~]# gluster v get all all
Option                                  Value
------                                  -----
cluster.server-quorum-ratio             51%
cluster.enable-shared-storage           disable
cluster.op-version                      70000
cluster.max-op-version                  70000
cluster.brick-multiplex                 on
cluster.max-bricks-per-process          250
glusterd.vol_count_per_thread           100
cluster.daemon-log-level                DEBUG

As I see only one glusterfsd pid per node, I am marking this bug as verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603
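For convenience, here is a minimal shell sketch of the reproduction and verification steps above. The hostnames (rhel7-node1..3), the brick path prefix (/bricks), and the volume names (vol1..vol15) are illustrative assumptions, not values taken from this report; adjust them to your environment.

```bash
#!/bin/bash
# Reproduction sketch for the steps above. Hostnames, brick paths and volume
# names are assumptions for illustration, not values from this bug report.
set -e

NODES=(rhel7-node1 rhel7-node2 rhel7-node3)

# Step 2: enable brick multiplexing cluster-wide.
# (The CLI may ask for confirmation before changing this option.)
gluster volume set all cluster.brick-multiplex on

# Steps 3-4: create and start 15 replica-3 (1x3) volumes one by one.
for i in $(seq 1 15); do
    gluster volume create "vol${i}" replica 3 \
        "${NODES[0]}:/bricks/vol${i}/brick" \
        "${NODES[1]}:/bricks/vol${i}/brick" \
        "${NODES[2]}:/bricks/vol${i}/brick" force
    gluster volume start "vol${i}"
done

# Step 5: while the volumes are starting, reboot one node from another shell,
# e.g.:  ssh rhel7-node3 reboot

# Verification: with brick-mux on, each node should report exactly one
# glusterfsd pid once the rebooted node is back and all volumes are started.
for n in "${NODES[@]}"; do
    echo "--- ${n} ---"
    ssh "${n}" pidof glusterfsd
done
```

With brick multiplexing enabled, the expected outcome after the rebooted node comes back is a single glusterfsd pid per node, matching the verification output shown above.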