Bug 1220047 - Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier
Summary: Data Tiering:3.7.0:data loss:detach-tier not flushing data to cold-tier
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.0
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1205540 1229233
Blocks: qe_tracker_everglades 1219513 glusterfs-tiering-supportability 1222088 1227485 1229269 1259079 1260923 1273726 1274411
 
Reported: 2015-05-09 12:55 UTC by Mohammed Rafi KC
Modified: 2023-09-14 02:59 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 1205540
Environment:
Last Closed: 2015-05-15 17:10:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Mohammed Rafi KC 2015-05-09 12:55:48 UTC
+++ This bug was initially created as a clone of Bug #1205540 +++

Description of problem:
=======================
In a tiered volume, detaching the tier with detach-tier reports success but does not flush the data on the hot tier down to the cold tier.
This leads to data loss.


Version-Release number of selected component (if applicable):
============================================================
3.7 upstream nightlies build http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs/epel-6-x86_64/glusterfs-3.7dev-0.777.git2308c07.autobuild/


How reproducible:
=================
Easy to reproduce


Steps to Reproduce:
==================
1. Create a GlusterFS volume (a distribute volume was used here) and start it.
2. Attach a tier to the volume using attach-tier.
3. Write some files to the volume; all of them (space permitting) are written to the hot tier.
4. Detach the tier using the detach-tier command (see the command sketch below).


Actual results:
===============
The tier is detached without flushing the data on the hot tier to the cold tier, so any files still on the hot tier are lost.

Expected results:
================
detach-tier should succeed only after all data has been flushed to the cold tier.


Additional info (CLI logs):
===============
[root@rhs-client44 everglades]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 3382e788-ee37-4d6c-b214-8469ca68e376
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick1/vol1/b1
Brick2: rhs-client38:/pavanbrick1/vol1/b1
Brick3: rhs-client37:/pavanbrick1/vol1/b1
[root@rhs-client44 everglades]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhs-client44:/pavanbrick1/vol1/b1     49152     0          Y       29969
Brick rhs-client38:/pavanbrick1/vol1/b1     49152     0          Y       30514
Brick rhs-client37:/pavanbrick1/vol1/b1     49152     0          Y       29475
NFS Server on localhost                     2049      0          Y       29993
NFS Server on rhs-client38                  2049      0          Y       30538
NFS Server on rhs-client37                  2049      0          Y       29499
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@rhs-client44 everglades]# gluster v attach-tier vol1 rhs-client44:/pavanbrick2/vol1_hot/hb1 rhs-client37:/pavanbrick2/vol1_hot/hb1
volume add-brick: success
[root@rhs-client44 everglades]# gluster v info vol1
 
Volume Name: vol1
Type: Tier
Volume ID: 3382e788-ee37-4d6c-b214-8469ca68e376
Status: Started
Number of Bricks: 5 x 1 = 5
Transport-type: tcp
Bricks:
Brick1: rhs-client37:/pavanbrick2/vol1_hot/hb1
Brick2: rhs-client44:/pavanbrick2/vol1_hot/hb1
Brick3: rhs-client44:/pavanbrick1/vol1/b1
Brick4: rhs-client38:/pavanbrick1/vol1/b1
Brick5: rhs-client37:/pavanbrick1/vol1/b1



[root@rhs-client44 everglades]# gluster v detach-tier vol1
volume remove-brick unknown: success
[root@rhs-client44 everglades]# gluster v info vol1
 
Volume Name: vol1
Type: Distribute
Volume ID: 3382e788-ee37-4d6c-b214-8469ca68e376
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: rhs-client44:/pavanbrick1/vol1/b1
Brick2: rhs-client38:/pavanbrick1/vol1/b1
Brick3: rhs-client37:/pavanbrick1/vol1/b1

--- Additional comment from Anand Avati on 2015-04-01 18:56:08 EDT ---

REVIEW: http://review.gluster.org/10108 (glusterd: WIP support for tier volumes 'detach start' and 'detach commit') posted (#1) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-04-07 07:14:48 EDT ---

REVIEW: http://review.gluster.org/10108 (glusterd: WIP support for tier volumes 'detach start' and 'detach commit') posted (#2) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-04-09 06:08:37 EDT ---

REVIEW: http://review.gluster.org/10108 (glusterd: WIP support for tier volumes 'detach start' and 'detach commit') posted (#3) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-04-14 00:12:50 EDT ---

REVIEW: http://review.gluster.org/10108 (glusterd: support for tier volumes 'detach start' and 'detach commit') posted (#4) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-04-16 06:19:59 EDT ---

REVIEW: http://review.gluster.org/10108 (glusterd: support for tier volumes 'detach start' and 'detach commit') posted (#5) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-04-18 07:59:25 EDT ---

REVIEW: http://review.gluster.org/10108 (glusterd: support for tier volumes 'detach start' and 'detach commit') posted (#6) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-04-21 16:52:11 EDT ---

REVIEW: http://review.gluster.org/10108 (glusterd: support for tier volumes 'detach start' and 'detach commit') posted (#7) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-04-22 06:20:24 EDT ---

REVIEW: http://review.gluster.org/10108 (glusterd: support for tier volumes 'detach start' and 'detach commit') posted (#8) for review on master by Dan Lambright (dlambrig)

--- Additional comment from Anand Avati on 2015-04-22 10:39:46 EDT ---

REVIEW: http://review.gluster.org/10108 (glusterd: support for tier volumes 'detach start' and 'detach commit') posted (#9) for review on master by Kaleb KEITHLEY (kkeithle)

--- Additional comment from Anand Avati on 2015-04-22 10:51:06 EDT ---

COMMIT: http://review.gluster.org/10108 committed in master by Kaleb KEITHLEY (kkeithle) 
------
commit 86b02afab780e559e82399b9e96381d8df594ed6
Author: Dan Lambright <dlambrig>
Date:   Mon Apr 13 02:42:12 2015 +0100

    glusterd: support for tier volumes 'detach start' and 'detach commit'
    
    These commands work in a manner analogous to rebalancing when removing a
    brick. The existing migration daemon detects "detach start" and switches
    to moving data off the hot tier. While in this state all lookups are
    directed to the cold tier.
    
    gluster v detach-tier <vol> start
    gluster v detach-tier <vol> commit
    
    The status and stop cli commands shall be submitted separately.
    
    Change-Id: I24fda5cc3ba74f5fb8aa9a3234ad51f18b80a8a0
    BUG: 1205540
    Signed-off-by: Dan Lambright <dlambrig>
    Signed-off-by: root <root>
    Signed-off-by: Dan Lambright <dlambrig>
    Reviewed-on: http://review.gluster.org/10108
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Tested-by: NetBSD Build System

Comment 1 Niels de Vos 2015-05-15 10:46:59 UTC
Is this a duplicate of bug 1219513? If it is, close it like that please.

Comment 2 Niels de Vos 2015-05-15 17:10:10 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

Comment 3 Red Hat Bugzilla 2023-09-14 02:59:04 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

