Bug 1466608
Summary: | multiple brick processes seen on gluster(fs)d restart in brick multiplexing | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Atin Mukherjee <amukherj>
Component: | glusterd | Assignee: | Atin Mukherjee <amukherj>
Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | rhgs-3.3 | CC: | amukherj, bugs, nchilaka, rhinduja, rhs-bugs, storage-qa-internal, vbellur
Target Milestone: | --- | Keywords: | Triaged
Target Release: | RHGS 3.3.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | brick-multiplexing | |
Fixed In Version: | glusterfs-3.8.4-32 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | 1465559 | Environment: |
Last Closed: | 2017-09-21 05:02:13 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 1465559 | |
Bug Blocks: | 1417151 | |
Description Atin Mukherjee 2017-06-30 04:22:43 UTC
upstream patch : https://review.gluster.org/17640
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/110561

[root@dhcp35-45 ~]# ps -ef | grep glusterfsd
root 28114 1 5 18:11 ? 00:00:01 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id vol_1.10.70.35.45.rhs-brick1-vol_1 -p /var/lib/glusterd/vols/vol_1/run/10.70.35.45-rhs-brick1-vol_1.pid -S /var/run/gluster/f3678cbb26724c43ffe643412d02da45.socket --brick-name /rhs/brick1/vol_1 -l /var/log/glusterfs/bricks/rhs-brick1-vol_1.log --xlator-option *-posix.glusterd-uuid=0205c280-0aab-4e0b-ab74-313a58795083 --brick-port 49153 --xlator-option vol_1-server.listen-port=49153
root 28158 1 0 18:11 ? 00:00:00 /usr/sbin/glusterfsd -s 10.70.35.45 --volfile-id vol_10.10.70.35.45.rhs-brick10-vol_10 -p /var/lib/glusterd/vols/vol_10/run/10.70.35.45-rhs-brick10-vol_10.pid -S /var/run/gluster/10bd4a1b912fe38eb41bfa64aff017c9.socket --brick-name /rhs/brick10/vol_10 -l /var/log/glusterfs/bricks/rhs-brick10-vol_10.log --xlator-option *-posix.glusterd-uuid=0205c280-0aab-4e0b-ab74-313a58795083 --brick-port 49154 --xlator-option vol_10-server.listen-port=49154
root 29006 13218 0 18:11 pts/1 00:00:00 grep --color=auto glusterfsd

If the above problem is not the same, kindly suggest another way for me to verify this bug.

on_qa validation: I am not seeing the problem of bricks failing to connect to their socket file: all bricks show Online in volume status and I am able to run I/O on some random volumes, i.e. volume status does not report the bricks as N/A, which was the problem as discussed with Dev. I ran volume start/stop in a loop about 50 times and did not hit the problem. Based on this, I am moving the bug to VERIFIED.

Test version: 3.8.4-34

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
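A minimal sketch of the verification flow described above, assuming a started volume named vol_1 (as in the ps output) on a cluster with brick multiplexing already enabled; the iteration count, sleep interval, and the optional glusterd restart are illustrative rather than the exact steps used during on_qa validation:

```bash
#!/bin/bash
# Sketch only: assumes a started volume named vol_1 (as in the ps output
# above) and brick multiplexing enabled beforehand, e.g. with
#   gluster volume set all cluster.brick-multiplex on
# Iteration count and sleep interval are illustrative.

for i in $(seq 1 50); do
    gluster --mode=script volume stop vol_1
    gluster --mode=script volume start vol_1
    # Optionally also restart the management daemon, as in the bug summary:
    # systemctl restart glusterd
    sleep 5

    # With multiplexing, all bricks on a node should be served by a single
    # glusterfsd; more than one points at duplicated brick processes.
    echo "iteration $i: $(pgrep -c glusterfsd) glusterfsd process(es)"

    # Bricks reported as N/A never reconnected to their socket file, which
    # was the symptom checked during on_qa validation.
    gluster volume status vol_1 | grep N/A && echo "iteration $i: brick(s) offline"
done
```

Pairing the process count with the volume status check covers both symptoms mentioned in this report: duplicated glusterfsd processes and bricks stuck at N/A.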