Bug 1229250 - Data Tiering:Old copy of file still remaining on EC(disperse) layer, when edited after attaching tier(new copy is moved to hot tier)
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On: 1212037
Blocks: qe_tracker_everglades 1202842
 
Reported: 2015-06-08 10:31 UTC by Nag Pavan Chilakam
Modified: 2016-09-17 15:37 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1212037
Environment:
Last Closed: 2015-10-30 17:39:58 UTC
Target Upstream Version:


Attachments (Terms of Use)
cli logs for failed_qa (25.11 KB, text/plain)
2015-07-04 12:07 UTC, Nag Pavan Chilakam
no flags

Description Nag Pavan Chilakam 2015-06-08 10:31:23 UTC
+++ This bug was initially created as a clone of Bug #1212037 +++

Description of problem:
======================
When we attach a tier to an EC volume that already contains files, editing one of those files with vim saves the new contents to the hot tier, but the cold (EC) tier still holds the old version of the file, renamed to <filename>~ (vim's backup-file naming convention).
Eg:
If a file f2 existed on the EC volume before the tier was attached, then after editing it the new f2 is placed on the hot tier while a stale f2~ remains on the EC bricks (see Additional info below).
This means the stale copies will consume a lot of disk space if the files are large.
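For context, the `~` file is a side effect of the editor's write-via-rename backup strategy. The following is a minimal, hypothetical Python sketch of that pattern (not code from gluster or vim): the original file is renamed to <name>~, and the new contents are written to a brand-new file, which the tiering policy then places on the hot tier while the renamed backup stays on the cold tier.

```python
# Sketch of the backup-on-write pattern that produces <name>~.
# write_with_backup() is an invented helper name for illustration.
import os
import tempfile

def write_with_backup(path, new_contents):
    if os.path.exists(path):
        os.replace(path, path + "~")   # old copy survives as <name>~ (stays put)
    with open(path, "w") as f:         # brand-new file: placed by current tier policy
        f.write(new_contents)

# Demonstrate on a scratch directory
d = tempfile.mkdtemp()
p = os.path.join(d, "f2")
with open(p, "w") as f:
    f.write("old")
write_with_backup(p, "new")            # now p holds "new", p + "~" holds "old"
```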


Version-Release number of selected component (if applicable):
============================================================
[root@vertigo ~]# gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@vertigo ~]# rpm -qa|grep gluster
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-regression-tests-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-resource-agents-3.7dev-0.994.gitf522001.el6.noarch
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64

How reproducible:
================
easily

Steps to Reproduce:
==================
1. Create an EC (disperse) volume.
2. Mount the volume and add files to it.
3. Attach a tier and edit one of the pre-existing files. The editor reports an error, but on a forced write the edited contents are saved to the hot tier, while the old copy still exists on the cold tier as <filename>~.
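For reference, a rough CLI transcript of these steps might look like the following. Volume name, brick paths, and counts are placeholders, not taken from this report, and the attach-tier syntax varies across GlusterFS releases:

```
# 1. Create and start a disperse (EC) volume (placeholder bricks)
gluster volume create ecvol disperse 6 redundancy 2 server{1..6}:/rhs/brick1/ecvol
gluster volume start ecvol

# 2. Mount the volume and create some files
mount -t glusterfs server1:/ecvol /mnt/ecvol
for i in $(seq 1 20); do echo "data" > /mnt/ecvol/f$i; done

# 3. Attach a hot tier, then force-edit a pre-existing file
gluster volume attach-tier ecvol replica 2 server{1..4}:/rhs/brick3/ecvol-tier
vim /mnt/ecvol/f2   # force the write; f2 lands on the hot tier, f2~ stays on the EC bricks
```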


Additional info:
=================
[root@ninja ~]# ls /rhs/brick*/rhat*
/rhs/brick1/rhatvol-10:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick1/rhatvol-2:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick2/rhatvol-12:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick2/rhatvol-4:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick3/rhatvol-6:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick3/rhatvol-tier:
f2  f9  newfile

/rhs/brick4/rhatvol-8:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

--- Additional comment from Anand Avati on 2015-04-24 10:56:30 EDT ---

REVIEW: http://review.gluster.org/10370 (ctr/xlator: Named lookup heal of pre-existing files, before ctr was ON.) posted (#2) for review on master by Joseph Fernandes (josferna@redhat.com)

--- Additional comment from Anand Avati on 2015-04-29 05:36:07 EDT ---

REVIEW: http://review.gluster.org/10370 (ctr/xlator: Named lookup heal of pre-existing files, before ctr was ON.) posted (#3) for review on master by Joseph Fernandes (josferna@redhat.com)

--- Additional comment from Anand Avati on 2015-05-02 03:08:02 EDT ---

REVIEW: http://review.gluster.org/10370 (ctr/xlator: Named lookup heal of pre-existing files, before ctr was ON.) posted (#4) for review on master by Joseph Fernandes (josferna@redhat.com)

--- Additional comment from Anand Avati on 2015-05-03 03:01:37 EDT ---

REVIEW: http://review.gluster.org/10370 (ctr/xlator: Named lookup heal of pre-existing files, before ctr was ON.) posted (#5) for review on master by Joseph Fernandes (josferna@redhat.com)

--- Additional comment from Anand Avati on 2015-05-03 14:47:07 EDT ---

REVIEW: http://review.gluster.org/10370 (ctr/xlator: Named lookup heal of pre-existing files, before ctr was ON.) posted (#6) for review on master by Joseph Fernandes (josferna@redhat.com)

--- Additional comment from Anand Avati on 2015-05-05 14:52:04 EDT ---

REVIEW: http://review.gluster.org/10370 (ctr/xlator: Named lookup heal of pre-existing files, before ctr was ON.) posted (#8) for review on master by Dan Lambright (dlambrig@redhat.com)

--- Additional comment from Anand Avati on 2015-05-06 07:42:32 EDT ---

COMMIT: http://review.gluster.org/10370 committed in master by Vijay Bellur (vbellur@redhat.com) 
------
commit cb11dd91a6cc296e4a3808364077f4eacb810e48
Author: Joseph Fernandes <josferna@redhat.com>
Date:   Fri Apr 24 19:22:44 2015 +0530

    ctr/xlator: Named lookup heal of pre-existing files, before ctr was ON.
    
    Problem: The CTR xlator records file metadata (heat/hardlinks)
    into the database. This works fine for files which are created
    after the CTR xlator is switched ON. But for files which were
    created before the CTR xlator was ON, CTR is not able to
    record either kind of metadata, i.e. heat or hardlinks, making
    those files immune to promotions/demotions.
    
    Solution: The solution implemented in this patch is to
    do a ctr-db heal of all such pre-existing files, using named lookup.
    For this purpose we use the inode-xlator context variable option
    in gluster.
    The inode-xlator context variable for the ctr xlator will have the
    following:
        a. A lock for the context variable
        b. A hardlink list: this list represents the successfully
           looked-up hardlinks.
    These are the scenarios in which the hardlink list is updated:
    1) Named lookup: Whenever a named lookup happens on a file, in the
       wind path we copy all required hardlink and inode information into the
       ctr_db_record structure, which resides in the frame->local variable.
       We don't update the database in the wind. During the unwind, we read the
       information from the ctr_db_record and:
       Check whether the inode context variable exists; if not, we create it.
       Check whether the hardlink is in the hardlink list.
          If it is not there, we add it to the list and send an update to the
          database using libgfdb.
          Please note: the database transaction can fail (and we ignore the
          failure), as a record may already exist in the db. This update heals
          the db record if it is not there.
          If it is already in the list, we ignore it.
    2) Inode forget: Whenever an inode forget hits, we clear the hardlink list in
       the inode context variable and delete the inode context variable.
       Please note: an inode forget may happen for two reasons:
       a. the inode is deleted;
       b. the in-memory inode is evicted from the inode table due to cache limits.
    3) create: Whenever a create happens, we create the inode context variable and
       add the hardlink. The database update is done as usual by ctr.
    4) link: Whenever a hardlink is created for the inode, we create the inode context
       variable, if not present, and add the hardlink to the list.
    5) unlink: Whenever an unlink happens, we delete the hardlink from the list.
    6) mknod: Same as create.
    7) rename: Whenever a rename happens, we update the hardlink in the list. If the
       hardlink was not present, we add it to the list.
    
    What is pending:
    1) This solution only works for named lookups.
    2) We don't track afr-self-heal/dht-rebalancer traffic for healing.
    
    Change-Id: Ia4bbaf84128ad6ce8c3ddd70bcfa82894c79585f
    BUG: 1212037
    Signed-off-by: Joseph Fernandes <josferna@redhat.com>
    Signed-off-by: Dan Lambright <dlambrig@redhat.com>
    Reviewed-on: http://review.gluster.org/10370
    Reviewed-by: Jeff Darcy <jdarcy@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Tested-by: NetBSD Build System
    Reviewed-by: Vijay Bellur <vbellur@redhat.com>
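The per-inode bookkeeping the commit message describes can be sketched as a small model. This is a hypothetical Python rendering for illustration only; the actual change is a C translator inside GlusterFS, and every name here (CtrHealModel, CtrInodeContext, FakeDb) is invented, not the real API.

```python
# Minimal model of the named-lookup heal: a per-inode lock plus a hardlink
# list, with a db update attempted only on the first sighting of a hardlink.
import threading

class CtrInodeContext:
    """Per-inode context: a lock and the set of successfully looked-up hardlinks."""
    def __init__(self):
        self.lock = threading.Lock()
        self.hardlinks = set()          # (parent_gfid, basename) pairs

class FakeDb:
    """Stand-in for the libgfdb database; insert fails if a record exists."""
    def __init__(self):
        self.records = {}
    def insert(self, gfid, link):
        if gfid in self.records:
            raise KeyError(gfid)
        self.records[gfid] = link

class CtrHealModel:
    def __init__(self, db):
        self.db = db
        self.contexts = {}              # gfid -> CtrInodeContext

    def _ctx(self, gfid):
        # Create the inode context on first use (as on create/mknod/lookup).
        return self.contexts.setdefault(gfid, CtrInodeContext())

    def named_lookup(self, gfid, parent_gfid, basename):
        """Scenario 1: heal the db record for a pre-existing file on lookup."""
        ctx = self._ctx(gfid)
        link = (parent_gfid, basename)
        with ctx.lock:
            if link in ctx.hardlinks:
                return False            # already seen: ignore
            ctx.hardlinks.add(link)
        try:
            self.db.insert(gfid, link)  # may fail if a record already exists
        except KeyError:
            pass                        # ignored: the record was already healed
        return True

    def unlink(self, gfid, parent_gfid, basename):
        """Scenario 5: drop the hardlink from the list."""
        ctx = self.contexts.get(gfid)
        if ctx:
            with ctx.lock:
                ctx.hardlinks.discard((parent_gfid, basename))

    def forget(self, gfid):
        """Scenario 2: inode forget clears and deletes the context."""
        self.contexts.pop(gfid, None)
```

Under this model, a second lookup of the same hardlink is a no-op, which matches the commit's note that the heal tolerates pre-existing db records.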

--- Additional comment from Niels de Vos on 2015-05-15 09:07:31 EDT ---

This change should not be in "ON_QA"; the patch posted for this bug is only available in the master branch and not in a release yet. Moving back to MODIFIED until there is a beta release of the next GlusterFS version.

Comment 3 Nag Pavan Chilakam 2015-07-04 11:59:35 UTC
Still seeing the issue on the latest build:
[root@scarface ecvol]# ls -l
total 4
drwxr-xr-x. 3 root root 101 Jul  4 17:26 cdir
-rw-r--r--. 1 root root 270 Jul  4 17:22 cf1
-rw-r--r--. 1 root root 205 Jul  4 17:18 cf1~
-rw-r--r--. 1 root root 329 Jul  4 17:26 cf2
-rw-r--r--. 1 root root 205 Jul  4 17:19 cf2~
[root@scarface ecvol]# 


attached are the logs

Comment 4 Nag Pavan Chilakam 2015-07-04 12:07:12 UTC
Created attachment 1046004 [details]
cli logs for failed_qa

Comment 5 Nag Pavan Chilakam 2015-07-04 12:08:05 UTC
sosreports of the failed QA run:
[qe-admin@rhsqe-repo failed_qa_logs]$ pwd
/home/repo/sosreports/bug.1229250/failed_qa_logs

Comment 6 Nag Pavan Chilakam 2015-07-04 12:13:54 UTC
client sosreports at qe-admin@rhsqe-repo:/home/repo/sosreports/bug.1229250/failed_qa_logs/client.sosreports

