Bug 1722805 - Healing not proceeding during in-service upgrade on a disperse volume
Summary: Healing not proceeding during in-service upgrade on a disperse volume
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: ctime
Version: 6
Hardware: All
OS: Linux
Severity: high
Priority: high
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On: 1713664 1720201
Blocks:
Reported: 2019-06-21 11:16 UTC by Kotresh HR
Modified: 2019-07-02 07:41 UTC (History)
13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1720201
Environment:
Last Closed: 2019-07-02 07:41:06 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System ID: Gluster.org Gerrit 22922 | Status: Merged | Summary: posix/ctime: Fix ctime upgrade issue | Last Updated: 2019-07-02 07:41:05 UTC

Description Kotresh HR 2019-06-21 11:16:27 UTC
+++ This bug was initially created as a clone of Bug #1720201 +++

Description of problem:
=======================
Performed an in-service upgrade from 5.x to 6.x on a 6-node setup
with a distributed-dispersed volume and brick multiplexing enabled.


Version-Release number of selected component (if applicable):
=============================================================
2 nodes still on 5.x
4 nodes on 6.x


How reproducible:
================
1/1

Steps to Reproduce:
==================
1. Create a distributed-dispersed volume with brick multiplexing enabled on a 5.x setup.
2. Mount the volume and start I/O.
3. Upgrade 2 nodes at a time and wait for healing to complete -- this completed successfully.
4. Upgrade the next 2 nodes and start healing.
5. Healing has not progressed for the past 5 hours (heal info has shown 150 files since then).
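The reproduction steps above can be sketched with the standard Gluster CLI. The volume name, server names, brick paths, and disperse geometry below are illustrative assumptions, not values taken from the report, and the commands require a real multi-node Gluster cluster to run.

```shell
# Sketch of the reproduction setup (assumed names and paths).

# Step 1: enable brick multiplexing cluster-wide, then create a
# distributed-dispersed volume (2 x (4+2) is an assumed geometry).
gluster volume set all cluster.brick-multiplex on
gluster volume create testvol disperse 6 redundancy 2 \
    server{1..6}:/bricks/b1 server{1..6}:/bricks/b2 force
gluster volume start testvol

# Step 2: mount the volume and start I/O against the mount point.
mount -t glusterfs server1:/testvol /mnt/testvol

# Steps 3-5: after upgrading a pair of nodes, trigger healing and
# watch the pending-entry counts per brick.
gluster volume heal testvol
gluster volume heal testvol info
```

In the failure described here, the entry count reported by `heal info` stayed at roughly 150 files once the second pair of nodes was upgraded.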


Actual results:
===============
Healing is not completing


Expected results:
================
Healing should complete successfully

--- Additional comment from Kotresh HR on 2019-06-13 11:33:17 UTC ---

(Duplicate of the description above, carried over from the cloned bug.)

--- Additional comment from Worker Ant on 2019-06-13 11:37:59 UTC ---

REVIEW: https://review.gluster.org/22858 (posix/ctime: Fix ctime upgrade issue) posted (#1) for review on master by Kotresh HR

--- Additional comment from Worker Ant on 2019-06-21 11:10:16 UTC ---

REVIEW: https://review.gluster.org/22858 (posix/ctime: Fix ctime upgrade issue) merged (#5) on master by Xavi Hernandez

Comment 1 Worker Ant 2019-06-21 11:37:18 UTC
REVIEW: https://review.gluster.org/22922 (posix/ctime: Fix ctime upgrade issue) posted (#1) for review on release-6 by Kotresh HR

Comment 2 Worker Ant 2019-07-02 07:41:06 UTC
REVIEW: https://review.gluster.org/22922 (posix/ctime: Fix ctime upgrade issue) merged (#3) on release-6 by hari gowtham

