Bug 958758
| Summary: | offlined brick process on server1 automatically starts when other server3 in cluster is powered off | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Rahul Hinduja <rhinduja> |
| Component: | glusterd | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED ERRATA | QA Contact: | Rahul Hinduja <rhinduja> |
| Severity: | urgent | Docs Contact: | |
| Priority: | high | | |
| Version: | 2.1 | CC: | amarts, rhs-bugs, sdharane, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.4.0.4rhs-1 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 959986 (view as bug list) | Environment: | |
| Last Closed: | 2013-09-23 22:39:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 959986 | | |
Description
Rahul Hinduja 2013-05-02 11:05:47 UTC
Verified with the build glusterfs-3.4.0.4rhs-1.el6rhs.x86_64: powering off one server no longer brings the offlined brick processes on the other servers in the cluster back online. Works as expected. (A verification sketch is included at the end of this report.)

log snippet:
============

```
[root@rhs-client11 ~]# gluster volume status
Status of volume: vol-dis-rep
Gluster process                                           Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.36.35:/rhs/brick1/b1                          N/A     N       5293
Brick 10.70.36.36:/rhs/brick1/b2                          49152   Y       5269
Brick 10.70.36.35:/rhs/brick1/b3                          N/A     N       5302
Brick 10.70.36.36:/rhs/brick1/b4                          49153   Y       5278
Brick 10.70.36.35:/rhs/brick1/b5                          N/A     N       5311
Brick 10.70.36.36:/rhs/brick1/b6                          49154   Y       5287
Brick 10.70.36.37:/rhs/brick1/b7                          49152   Y       5271
Brick 10.70.36.38:/rhs/brick1/b8                          49152   Y       5269
Brick 10.70.36.37:/rhs/brick1/b9                          49153   Y       5280
Brick 10.70.36.38:/rhs/brick1/b10                         49153   Y       5278
Brick 10.70.36.37:/rhs/brick1/b11                         49154   Y       5289
Brick 10.70.36.38:/rhs/brick1/b12                         49154   Y       5287
NFS Server on localhost                                   2049    Y       5323
Self-heal Daemon on localhost                             N/A     Y       5327
NFS Server on c6b5d4e9-3782-457c-8542-f32b0941ed05        2049    Y       5299
Self-heal Daemon on c6b5d4e9-3782-457c-8542-f32b0941ed05  N/A     Y       5303
NFS Server on f9cc4b9c-97e1-4f65-9657-3b050d45296e        2049    Y       5299
Self-heal Daemon on f9cc4b9c-97e1-4f65-9657-3b050d45296e  N/A     Y       5303
NFS Server on 6962d204-37c8-436b-8ea6-a9698be40ec6        2049    Y       5301
Self-heal Daemon on 6962d204-37c8-436b-8ea6-a9698be40ec6  N/A     Y       5305

There are no active volume tasks
```

In the second status output, taken after the third server is powered off, the 10.70.36.37 bricks, NFS server and self-heal daemon are no longer listed, and the bricks that were taken offline on 10.70.36.35 remain offline:

```
[root@rhs-client11 ~]# gluster volume status
Status of volume: vol-dis-rep
Gluster process                                           Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.36.35:/rhs/brick1/b1                          N/A     N       5293
Brick 10.70.36.36:/rhs/brick1/b2                          49152   Y       5269
Brick 10.70.36.35:/rhs/brick1/b3                          N/A     N       5302
Brick 10.70.36.36:/rhs/brick1/b4                          49153   Y       5278
Brick 10.70.36.35:/rhs/brick1/b5                          N/A     N       5311
Brick 10.70.36.36:/rhs/brick1/b6                          49154   Y       5287
Brick 10.70.36.38:/rhs/brick1/b8                          49152   Y       5269
Brick 10.70.36.38:/rhs/brick1/b10                         49153   Y       5278
Brick 10.70.36.38:/rhs/brick1/b12                         49154   Y       5287
NFS Server on localhost                                   2049    Y       5323
Self-heal Daemon on localhost                             N/A     Y       5327
NFS Server on c6b5d4e9-3782-457c-8542-f32b0941ed05        2049    Y       5299
Self-heal Daemon on c6b5d4e9-3782-457c-8542-f32b0941ed05  N/A     Y       5303
NFS Server on f9cc4b9c-97e1-4f65-9657-3b050d45296e        2049    Y       5299
Self-heal Daemon on f9cc4b9c-97e1-4f65-9657-3b050d45296e  N/A     Y       5303

There are no active volume tasks
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
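For reference, a minimal verification sketch along the lines of the check above. It assumes the topology shown in the status output (volume vol-dis-rep, bricks under /rhs/brick1, server1 = 10.70.36.35, server3 = 10.70.36.37) and that the server1 bricks were taken offline by killing their brick processes; the report does not spell out the exact reproduction commands, so treat these as illustrative rather than the reporter's literal steps.

```sh
# Verification sketch only: hostnames, volume name, and the kill-based way of
# offlining bricks are assumptions taken from the status output above.

VOL=vol-dis-rep

# 1. On server1, kill its brick processes so they show Online = N.
#    The last column of `gluster volume status` is the brick PID.
gluster volume status "$VOL" | awk '/^Brick 10\.70\.36\.35/ {print $NF}' | xargs -r kill

# 2. Confirm the server1 bricks are now reported as N/A / N.
gluster volume status "$VOL"

# 3. Power off server3 (10.70.36.37) out of band (IPMI, hypervisor, etc.).

# 4. Re-check from server1. With glusterfs-3.4.0.4rhs-1 the bricks killed in
#    step 1 must still show Online = N; with the affected build, glusterd
#    restarted them when the peer went down.
gluster volume status "$VOL"
```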