Bug 1277112 - Data Tiering: File create and new writes to existing file fail when the hot tier is full instead of redirecting/flushing the data to the cold tier
Summary: Data Tiering: File create and new writes to existing file fail when the hot tier is full instead of redirecting/flushing the data to the cold tier
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Nithya Balachandran
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard: tier-migration
Depends On: 1259312
Blocks: 1260923 1268895
 
Reported: 2015-11-02 12:07 UTC by Nag Pavan Chilakam
Modified: 2019-04-03 09:15 UTC
CC List: 10 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
When hot tier storage is full, write operations such as file creation or new writes to existing files fail with a 'No space left on device' error, instead of redirecting writes or flushing data to cold tier storage. Workaround: If the hot tier is not completely full, it is possible to work around this issue by waiting for the next CTR promote/demote cycle before continuing with write operations. If the hot tier does fill completely, administrators can copy a file from the hot tier to a safe location, delete the original file from the hot tier, and wait for demotion to free more space on the hot tier before copying the file back.
Clone Of: 1259312
Environment:
Last Closed: 2018-02-06 17:34:55 UTC
Embargoed:



Description Nag Pavan Chilakam 2015-11-02 12:07:42 UTC
+++ This bug was initially created as a clone of Bug #1259312 +++

Description of problem:
======================
Given that the hot tier is typically built on costly storage, it is very likely that the hot tier has much less disk space than the cold tier.
The problem is that while I am writing to a file, if the hot tier is full, the new writes fail. I am also unable to create any more new files, even though the cold tier is largely free.



Version-Release number of selected component (if applicable):
=========================================================
[root@nag-manual-node1 brick999]# gluster --version
glusterfs 3.7.3 built on Aug 27 2015 01:23:05
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@nag-manual-node1 brick999]# rpm -qa|grep gluster
glusterfs-libs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-fuse-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-server-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-api-3.7.3-0.82.git6c4096f.el6.x86_64
glusterfs-cli-3.7.3-0.82.git6c4096f.el6.x86_64
python-gluster-3.7.3-0.82.git6c4096f.el6.noarch
glusterfs-client-xlators-3.7.3-0.82.git6c4096f.el6.x86_64
[root@nag-manual-node1 brick999]# 


How reproducible:
=====================
easily

Steps to Reproduce:
====================
1. Have a cold tier with huge space and a hot tier with only about 1 GB of space.
2. Turn on CTR and set the demote frequency to a large value, say 3600 seconds (1 hour); see the setup sketch after these steps.
3. Fill the volume with about 990 MB of data (which will go to the hot tier).
4. Create a file of about 100 MB (for example, using the dd command).
5. While the file is being created, it fails to accommodate any more writes after about 10 MB, as the hot tier is full.
6. Also try to create new files (to check whether new files go to the cold tier, since the hot tier is full).
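
Setup sketch:
=============
A minimal, illustrative sequence matching the steps above. The volume name "tiervol", the hostname/brick path, and the mount point are assumptions, and the syntax is the glusterfs 3.7-era CLI:

# Attach a small (~1 GB) hot tier brick to an existing volume
gluster volume attach-tier tiervol node1:/bricks/hot/brick1

# Enable the change time recorder and set a long demote interval (in seconds)
gluster volume set tiervol features.ctr-enabled on
gluster volume set tiervol cluster.tier-demote-frequency 3600

# From a client mount, fill most of the ~1 GB hot tier ...
dd if=/dev/urandom of=/mnt/tiervol/filler bs=1M count=990

# ... then try to create a ~100 MB file; writes fail once the hot tier is full
dd if=/dev/urandom of=/mnt/tiervol/newfile bs=1M count=100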


Actual results:
===============
Existing file writes and new file creates fail when the hot tier is full, even though the cold tier is largely free.

Expected results:
=================
New writes and file creates should go to the cold tier when the hot tier is full, or relatively cold files in the hot tier should be flushed to accommodate new files, irrespective of the demote frequency.

Additional info:


Work-Around:
==========
Wait for the next CTR promote/demote cycle to kick in
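
One hedged way to make the next cycle kick in sooner is to temporarily lower the demote frequency and restore it afterwards; the volume name is an assumption, and the minimum accepted value may vary by release:

# Temporarily shorten the demote interval so the next demote cycle runs sooner
gluster volume set tiervol cluster.tier-demote-frequency 150
# ... wait for demotion to free space on the hot tier, then restore the original value
gluster volume set tiervol cluster.tier-demote-frequency 3600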




CLI Log:
==========
[root@nag-manual-nfsclient1 srt]# dd if=/dev/urandom of=junkrandom.120m bs=1024 count=120000
120000+0 records in
120000+0 records out
122880000 bytes (123 MB) copied, 30.1008 s, 4.1 MB/s
[root@nag-manual-nfsclient1 srt]# ll
total 1320000
-rw-r--r--. 1 root root 1228800000 Sep  2 22:10 junkrandom
-rw-r--r--. 1 root root  122880000 Sep  2 22:13 junkrandom.120m
[root@nag-manual-nfsclient1 srt]# du -sh *
1.2G	junkrandom
118M	junkrandom.120m
[root@nag-manual-nfsclient1 srt]# cp junkrandom.120m junkrandom.120m1
========creating a 300MB file when there is hardly 35MB free
[root@nag-manual-nfsclient1 srt]# dd if=/dev/urandom of=bricklimit bs=1024 count=3000000
dd: writing `bricklimit': No space left on device
160892+0 records in
160891+0 records out
164752384 bytes (165 MB) copied, 35.6738 s, 4.6 MB/s
[root@nag-manual-nfsclient1 srt]# dd if=/dev/urandom of=bricklimit.1 bs=1024 count=3000000
dd: opening `bricklimit.1': No space left on device
[root@nag-manual-nfsclient1 srt]# 
[root@nag-manual-nfsclient1 srt]# 
[root@nag-manual-nfsclient1 srt]# 
[root@nag-manual-nfsclient1 srt]# ls -l
total 1475652
-rw-r--r--. 1 root root   47190016 Sep  2 22:17 bricklimit
-rw-r--r--. 1 root root 1228800000 Sep  2 22:10 junkrandom
-rw-r--r--. 1 root root  122880000 Sep  2 22:13 junkrandom.120m
-rw-r--r--. 1 root root  122880000 Sep  2 22:14 junkrandom.120m1
[root@nag-manual-nfsclient1 srt]# touch newf1
touch: cannot touch `newf1': No space left on device
[root@nag-manual-nfsclient1 srt]# touch newf1
touch: cannot touch `newf1': No space left on device
[root@nag-manual-nfsclient1 srt]# touch newf1
touch: cannot touch `newf1': No space left on device
[root@nag-manual-nfsclient1 srt]# du -sh *
35M	bricklimit
1.2G	junkrandom
118M	junkrandom.120m
118M	junkrandom.120m1
[root@nag-manual-nfsclient1 srt]#

--- Additional comment from nchilaka on 2015-09-02 08:11:50 EDT ---

sosreports @ 
rhsqe-repo.lab.eng.blr.redhat.com:/home/repo/sosreports/bug.1259312/

--- Additional comment from nchilaka on 2015-09-02 08:14:12 EDT ---



--- Additional comment from nchilaka on 2015-10-13 05:40:15 EDT ---

This is failing on glusterfs-server-3.7.5-0.18

[root@rhel7-autofuseclient estonia]# for i in {1..100};do touch x.$i;done
touch: cannot touch ‘x.1’: No space left on device
touch: cannot touch ‘x.2’: No space left on device
touch: cannot touch ‘x.3’: No space left on device
touch: cannot touch ‘x.4’: No space left on device
touch: cannot touch ‘x.5’: No space left on device
touch: cannot touch ‘x.6’: No space left on device
touch: cannot touch ‘x.7’: No space left on device
touch: cannot touch ‘x.8’: No space left on device
touch: cannot touch ‘x.9’: No space left on device
touch: cannot touch ‘x.10’: No space left on device
touch: cannot touch ‘x.11’: No space left on device
touch: cannot touch ‘x.12’: No space left on device
touch: cannot touch ‘x.13’: No space left on device
touch: cannot touch ‘x.14’: No space left on device
touch: cannot touch ‘x.15’: No space left on device
touch: cannot touch ‘x.16’: No space left on device
touch: cannot touch ‘x.17’: No space left on device
touch: cannot touch ‘x.18’: No space left on device
touch: cannot touch ‘x.19’: No space left on device
touch: cannot touch ‘x.20’: No space left on device
touch: cannot touch ‘x.21’: No space left on device
touch: cannot touch ‘x.22’: No space left on device
touch: cannot touch ‘x.23’: No space left on device
touch: cannot touch ‘x.24’: No space left on device
touch: cannot touch ‘x.25’: No space left on device
touch: cannot touch ‘x.26’: No space left on device
touch: cannot touch ‘x.27’: No space left on device
touch: cannot touch ‘x.28’: No space left on device
touch: cannot touch ‘x.29’: No space left on device
touch: cannot touch ‘x.30’: No space left on device
touch: cannot touch ‘x.31’: No space left on device
touch: cannot touch ‘x.32’: No space left on device
touch: cannot touch ‘x.33’: No space left on device
touch: cannot touch ‘x.34’: No space left on device
touch: cannot touch ‘x.35’: No space left on device
touch: cannot touch ‘x.36’: No space left on device
touch: cannot touch ‘x.37’: No space left on device
touch: cannot touch ‘x.38’: No space left on device
touch: cannot touch ‘x.39’: No space left on device
touch: cannot touch ‘x.40’: No space left on device
touch: cannot touch ‘x.41’: No space left on device
touch: cannot touch ‘x.42’: No space left on device
touch: cannot touch ‘x.43’: No space left on device
touch: cannot touch ‘x.44’: No space left on device
touch: cannot touch ‘x.45’: No space left on device
touch: cannot touch ‘x.46’: No space left on device
touch: cannot touch ‘x.47’: No space left on device
touch: cannot touch ‘x.48’: No space left on device
touch: cannot touch ‘x.49’: No space left on device
touch: cannot touch ‘x.50’: No space left on device
touch: cannot touch ‘x.51’: No space left on device
touch: cannot touch ‘x.52’: No space left on device
touch: cannot touch ‘x.53’: No space left on device
touch: cannot touch ‘x.54’: No space left on device
touch: cannot touch ‘x.55’: No space left on device
touch: cannot touch ‘x.56’: No space left on device



refer bz#1271151

--- Additional comment from Dan Lambright on 2015-10-26 13:42:11 EDT ---

We cannot do anything with write updates, but my understanding is that DHT has a mechanism to redirect new file creates to a different brick when the hashed subvolume is full. We need to understand why this does not work with tier.

--- Additional comment from Nithya Balachandran on 2015-10-30 04:39:40 EDT ---

What was the name of the volume on which this failed?

--- Additional comment from Nithya Balachandran on 2015-10-30 05:15:33 EDT ---

I tried this on the latest master code and could not reproduce the issue where new file creates were still going to the hashed subvol. 

We cannot do anything about the writes until the file is moved to the cold tier - this is existing DHT behaviour.


I am moving this to WorksForMe. Please reopen if seen again.

--- Additional comment from nchilaka on 2015-11-02 07:05:02 EST ---

Hi,
I see this still failing on the latest downstream build.
I am cloning this to downstream for further tracking.


===========================
[root@mia diskfull]# for i in {1..10};do dd if=/dev/urandom of=bigfi bs=1024 count=300000;done
dd: failed to open ‘bigfi’: No space left on device
dd: failed to open ‘bigfi’: No space left on device
dd: failed to open ‘bigfi’: No space left on device
dd: failed to open ‘bigfi’: No space left on device
dd: failed to open ‘bigfi’: No space left on device
dd: failed to open ‘bigfi’: No space left on device
dd: failed to open ‘bigfi’: No space left on device
dd: failed to open ‘bigfi’: No space left on device
dd: failed to open ‘bigfi’: No space left on device
dd: failed to open ‘bigfi’: No space left on device



fuse mount logs:
==================
The message "E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-7: remote operation failed. Path: /bigfi [No space left on device]" repeated 9 times between [2015-11-02 04:40:08.609241] and [2015-11-02 04:40:08.723177]
The message "E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-6: remote operation failed. Path: /bigfi [No space left on device]" repeated 9 times between [2015-11-02 04:40:08.609351] and [2015-11-02 04:40:08.723197]
^C
[root@mia glusterfs]# tail -f mnt-diskfull.log 
[2015-11-02 04:40:08.635409] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 30100: /bigfi => -1 (No space left on device)
[2015-11-02 04:40:08.647871] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 30102: /bigfi => -1 (No space left on device)
[2015-11-02 04:40:08.660484] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 30104: /bigfi => -1 (No space left on device)
[2015-11-02 04:40:08.673046] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 30106: /bigfi => -1 (No space left on device)
[2015-11-02 04:40:08.685465] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 30108: /bigfi => -1 (No space left on device)
[2015-11-02 04:40:08.698471] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 30110: /bigfi => -1 (No space left on device)
[2015-11-02 04:40:08.710918] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 30112: /bigfi => -1 (No space left on device)
[2015-11-02 04:40:08.723667] W [fuse-bridge.c:1978:fuse_create_cbk] 0-glusterfs-fuse: 30114: /bigfi => -1 (No space left on device)
The message "E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-7: remote operation failed. Path: /bigfi [No space left on device]" repeated 9 times between [2015-11-02 04:40:08.609241] and [2015-11-02 04:40:08.723177]
The message "E [MSGID: 114031] [client-rpc-fops.c:251:client3_3_mknod_cbk] 0-diskfull-client-6: remote operation failed. Path: /bigfi [No space left on device]" repeated 9 times between [2015-11-02 04:40:08.609351] and [2015-11-02 04:40:08.723197]

Comment 10 Nithya Balachandran 2016-01-25 08:06:41 UTC
The doc text looks fine. We might also want to say that if the hot tier does hit 100%, the admin should copy a file off the mount to some safe place, then wait for files to get demoted before copying it back to the volume.
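
As a rough illustration of that workaround (the mount point and file name are hypothetical):

# Copy a hot file off the mount to a safe location, then remove the original
cp /mnt/tiervol/bigfile /root/safe/
rm /mnt/tiervol/bigfile
# Wait for files to get demoted and free space on the hot tier, then copy it back
cp /root/safe/bigfile /mnt/tiervol/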

Comment 12 Nithya Balachandran 2016-01-27 04:28:02 UTC
Thanks Laura. This looks good.

Comment 13 Raghavendra G 2016-01-27 16:39:59 UTC
Here is a summary of discussions I had with Rafi related to this problem:

1. Creates falling on full hot tier.
   
   We need a min-free-disk check when we create files on the hot tier. The reason min-free-disk might not be working as of now is that we create a linkto file on the cold subvol and then blindly create the data file on the hot tier. Before trying to create a linkto file on the cold subvol, we should check whether the hot tier has enough space, and only if that is true should we create a linkto file on the cold tier and a data file on the hot tier. (See the min-free-disk example after this list.)

2. Writes to a full hot tier.

   We can add a min-free-disk check in writev_cbk. If the hot tier is getting full, the client can instruct CTR (by lowering the heat) to consider the file for demotion. More details can be worked out (such as handling constant migration of files between the hot and cold tiers when a large number of files are being accessed). The advantage of tier compared to DHT is that we don't have to change the layout to initiate migration. We can leverage CTR to migrate files to the cold tier.
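
For context, min-free-disk above refers to the existing DHT option that keeps new files off a subvolume once its free space drops below a threshold. A sketch of how it is set today (the volume name is an assumption):

# Existing DHT option: avoid placing new files on a subvolume with less than 10% free space
gluster volume set tiervol cluster.min-free-disk 10%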

Comment 19 vnosov 2017-11-29 23:39:29 UTC
The bug is reproduced on GlusterFS 3.12.3.

Comment 20 Shyamsundar 2018-02-06 17:34:55 UTC
Thank you for your bug report.

The documentation for the reported problem has been updated.

Further, we are no longer releasing any bug fixes or other updates for Tier. This bug will be set to CLOSED WONTFIX to reflect this.

Comment 21 vnosov 2018-02-06 18:40:31 UTC
Shyamsundar,

What does "no longer releasing any bug fixes or other updates for Tier" mean? Is the Tiering feature being deprecated or desupported?

Thanks!

Viktor Nosov.

