Bug 1738524 - After node reboot gluster brick process of one volume is offline
Summary: After node reboot gluster brick process of one volume is offline
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Mohit Agrawal
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Depends On:
Blocks: 1732703
 
Reported: 2019-08-07 11:25 UTC by Bala Konda Reddy M
Modified: 2019-10-25 09:18 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-28 03:04:08 UTC
Embargoed:



Description Bala Konda Reddy M 2019-08-07 11:25:17 UTC
Description of problem:
On a three-node cluster with 2000 volumes and continuous I/O running on 6 of them, an in-service upgrade was performed from the 6.0.7 build to the 6.0.11 build. The first node upgraded successfully and heal completed. The second node was then upgraded and rebooted; after the reboot, one brick process is offline.
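A minimal way to spot the affected brick after the reboot (the volume name below is a placeholder, not taken from this report):

gluster volume status                 # the "Online" column shows N for the offline brick
gluster volume status <volname>       # narrow the check to the affected volume
ps -ef | grep glusterfsd              # with brick mux, one glusterfsd serves many bricks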


Version-Release number of selected component (if applicable):
glusterfs-6.0-11.el7rhgs.x86_64

How reproducible:
1/1

Steps to Reproduce:
1. On a three-node cluster with brick multiplexing enabled, created 2000 volumes (replica 3), all in started state (see the command sketch after these steps).
2. Mounted 6 volumes and ran continuous I/O on them.
3. Performed an in-service upgrade to the latest build.
4. The first node upgraded successfully; all processes came online and heal completed successfully.
5. Upgraded the second node successfully and rebooted the node after the upgrade.
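A minimal sketch of the commands behind steps 1-2 and the heal check in step 4; hostnames, brick paths, and mount points are placeholders, not taken from this report:

# Enable brick multiplexing cluster-wide (step 1).
gluster volume set all cluster.brick-multiplex on

# Create and start 2000 replica 3 volumes (step 1).
for i in $(seq 1 2000); do
    gluster volume create vol_$i replica 3 \
        server1:/bricks/brick_$i/b server2:/bricks/brick_$i/b server3:/bricks/brick_$i/b
    gluster volume start vol_$i
done

# Mount a few volumes and keep I/O running on them (step 2).
mkdir -p /mnt/vol_1
mount -t glusterfs server1:/vol_1 /mnt/vol_1

# After upgrading a node (steps 3-4), check heal state per volume.
gluster volume heal vol_1 info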

Actual results:
One brick process is offline.

Expected results:
All brick processes should be online after the node reboot.

Additional info:

