Bug 1596048 - dd command responds only after a very long delay when the PVC gets full
Summary: dd command responds only after a very long delay when the PVC gets full
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gluster-block
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Pranith Kumar K
QA Contact: Nitin Goyal
URL:
Whiteboard:
Depends On: RHGS-Bad-IO
Blocks:
 
Reported: 2018-06-28 07:14 UTC by Nitin Goyal
Modified: 2019-02-12 07:10 UTC (History)
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-12 07:10:20 UTC
Embargoed:



Description Nitin Goyal 2018-06-28 07:14:04 UTC
Description of problem: One busybox pod was created with a block-device PVC; the PVC mounted in the busybox pod is 10 GB. I was running I/O with the dd command in that pod. It worked fine while the PVC was not full, but once the PVC got full it responded very slowly: it took 41 minutes for a 100 MB file write to return.


Version-Release number of selected component (if applicable):
gluster-block-0.2.1-20.el7rhgs.x86_64

How reproducible:
Consistently reproducible (per comment 4).

Steps to Reproduce:
1. Create one PVC.
2. Create one busybox pod with that PVC (a hedged sketch of both objects follows these steps).
3. Run I/O on that pod with dd:
# for i in $(seq 1 10); do echo ""; time dd if=/dev/zero of=file$i bs=1M count=100; echo file$i; done
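
A minimal sketch of steps 1 and 2 on OpenShift, assuming the cluster has a gluster-block-backed storage class; the storage class name "glusterfs-storage-block" and the object names "block-pvc" / "busybox-block" are placeholders for illustration, not values taken from this environment:

cat <<'EOF' | oc create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc                                # placeholder name
spec:
  storageClassName: glusterfs-storage-block      # assumption: the cluster's gluster-block storage class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                              # 10 GB PVC as in the description
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-block                            # placeholder name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600000"]                # keep the pod running long enough for the test
    volumeMounts:
    - name: data
      mountPath: /usr/share/busybox              # mount point seen in the output below
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: block-pvc
EOF

The dd loop from step 3 is then run inside the pod, e.g. via oc rsh busybox-block.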

Actual results:
dd: writing 'file202': No space left on device
98+0 records in
97+0 records out
101711872 bytes (97.0MB) copied, 2515.234989 seconds, 39.5KB/s
Command exited with non-zero status 1
real	41m 55.23s
user	0m 0.00s
sys	0m 0.18s


Expected results:
It should not take 41 minutes to return the ENOSPC error for a 100 MB write; the failure should be reported promptly.

Comment 2 Nitin Goyal 2018-06-28 07:19:53 UTC
Output of running command

/usr/share/busybox # for i in $(seq 191 205); do echo ""; time dd if=/dev/zero of=file$i bs=1M count=100; echo file$i; done

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 4.276900 seconds, 23.4MB/s
real	0m 4.27s
user	0m 0.00s
sys	0m 0.10s
file191

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 19.172164 seconds, 5.2MB/s
real	0m 19.17s
user	0m 0.00s
sys	0m 0.14s
file192

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 49.030076 seconds, 2.0MB/s
real	0m 49.03s
user	0m 0.00s
sys	0m 0.17s
file193

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 38.406676 seconds, 2.6MB/s
real	0m 38.40s
user	0m 0.00s
sys	0m 0.17s
file194

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 33.227814 seconds, 3.0MB/s
real	0m 33.22s
user	0m 0.00s
sys	0m 0.18s
file195

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 34.326837 seconds, 2.9MB/s
real	0m 34.32s
user	0m 0.00s
sys	0m 0.17s
file196

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 32.628058 seconds, 3.1MB/s
real	0m 32.62s
user	0m 0.00s
sys	0m 0.18s
file197

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 32.564316 seconds, 3.1MB/s
real	0m 32.56s
user	0m 0.00s
sys	0m 0.19s
file198

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 30.110446 seconds, 3.3MB/s
real	0m 30.11s
user	0m 0.00s
sys	0m 0.17s
file199

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 27.492989 seconds, 3.6MB/s
real	0m 27.49s
user	0m 0.00s
sys	0m 0.19s
file200

100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 26.993135 seconds, 3.7MB/s
real	0m 26.99s
user	0m 0.00s
sys	0m 0.19s
file201

dd: writing 'file202': No space left on device
98+0 records in
97+0 records out
101711872 bytes (97.0MB) copied, 2515.234989 seconds, 39.5KB/s
Command exited with non-zero status 1
real	41m 55.23s
user	0m 0.00s
sys	0m 0.18s
file202

dd: writing 'file203': No space left on device
1+0 records in
0+0 records out
0 bytes (0B) copied, 0.001249 seconds, 0B/s
Command exited with non-zero status 1
real	0m 0.00s
user	0m 0.00s
sys	0m 0.00s
file203

dd: writing 'file204': No space left on device
1+0 records in
0+0 records out
0 bytes (0B) copied, 0.001133 seconds, 0B/s
Command exited with non-zero status 1
real	0m 0.00s
user	0m 0.00s
sys	0m 0.00s
file204

dd: writing 'file205': No space left on device
1+0 records in
0+0 records out
0 bytes (0B) copied, 0.001107 seconds, 0B/s
Command exited with non-zero status 1
real	0m 0.00s
user	0m 0.00s
sys	0m 0.00s
file205
/usr/share/busybox # command terminated with exit code 137

Comment 3 Nitin Goyal 2018-06-28 08:07:58 UTC
logs and sosreports

http://rhsqe-repo.lab.eng.blr.redhat.com/cns/bugs/BZ-1596048/

Comment 4 Pranith Kumar K 2018-07-06 09:16:42 UTC
(In reply to Nitin Goyal from comment #2)

> 
> 100+0 records in
> 100+0 records out
> 104857600 bytes (100.0MB) copied, 26.993135 seconds, 3.7MB/s
> real	0m 26.99s
> user	0m 0.00s
> sys	0m 0.19s
> file201
> 
> dd: writing 'file202': No space left on device
> 98+0 records in
> 97+0 records out
> 101711872 bytes (97.0MB) copied, 2515.234989 seconds, 39.5KB/s
> Command exited with non-zero status 1
> real	41m 55.23s
> user	0m 0.00s
> sys	0m 0.18s
> file202
> 
> dd: writing 'file203': No space left on device
> 1+0 records in
> 0+0 records out
> 0 bytes (0B) copied, 0.001249 seconds, 0B/s
> Command exited with non-zero status 1
> real	0m 0.00s
> user	0m 0.00s
> sys	0m 0.00s
> file203

Brian,
    In this deployment a preallocated file from glusterfs is exported as a disk inside the application pod and formatted with XFS. We are seeing that when the disk space is nearing full, dd takes 41 minutes to return the error for a 100 MB write. This issue is consistently reproducible. Do you have any insight into why this may happen, and what we should do to find the root cause?
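
For reference, a hedged sketch of how this layout can be confirmed from inside the application pod (the pod name and device letter are placeholders, not values from this setup):

oc rsh <busybox-pod-name>        # enter the application pod
mount | grep xfs                 # the block PVC should show up as an xfs mount, e.g. /dev/sdX on /usr/share/busybox
df -h /usr/share/busybox         # shows how close the filesystem is to full when the slowdown starts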

Pranith

Comment 5 Pranith Kumar K 2018-07-06 10:57:24 UTC
Nitin,
     Had a chat with Brian Foster. He suspects something is going wrong in the buffered-write -ENOSPC retry sequence, which frees up internal block reservations in the hope of allowing the write to succeed. But ~40 minutes seems pretty bad, and he needs the following information to figure out what could be leading to such poor performance.

1) Get the filesystem into a state where a single dd command will take forever.
2) trace-cmd record -e 'xfs:*' -o <path-to-output-file> <dd command> (an illustrative invocation follows the list)
3) Let the above command run for a few minutes, then attach the resulting trace file to this BZ.
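
An illustrative invocation (hedged: the output path /tmp/xfs-trace.dat and the dd target file are placeholders; the xfs:* event list is the one requested above):

trace-cmd record -e 'xfs:*' -o /tmp/xfs-trace.dat \
    dd if=/dev/zero of=/usr/share/busybox/file-enospc bs=1M count=100
# let it run for a few minutes as requested in step 3, then attach /tmp/xfs-trace.dat to the BZ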

Pranith

Comment 6 Yaniv Kaul 2018-07-08 07:11:49 UTC
Can you also try without buffering, just to see if this may have something to do with it? (add 'oflag=direct' to your 'dd' command)
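
For reference, this is the reporter's loop from the description with the suggested flag added (hedged: this assumes the busybox dd build supports oflag=):

# for i in $(seq 1 10); do echo ""; time dd if=/dev/zero of=file$i bs=1M count=100 oflag=direct; echo file$i; done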

Comment 7 Pranith Kumar K 2018-07-09 11:18:33 UTC
(In reply to Yaniv Kaul from comment #6)
> Can you also try without buffering, just to see if this may have something
> to do with it? (add 'oflag=direct' to your 'dd' command)

Yaniv,
     I guess we need to fix the issue in the case where the I/O is buffered. What is the outcome you are expecting with the suggested steps?

Pranith

Comment 8 Yaniv Kaul 2018-07-09 11:25:19 UTC
(In reply to Pranith Kumar K from comment #7)
> (In reply to Yaniv Kaul from comment #6)
> > Can you also try without buffering, just to see if this may have something
> > to do with it? (add 'oflag=direct' to your 'dd' command)
> 
> Yaniv,
>      I guess we need to fix the issue in the case where the I/O is buffered.
> What is the outcome you are expecting with the suggested steps?
> 
> Pranith

I'm trying to assess the severity of the issue. Not many real-life applications do serial buffered I/O in 1 MB chunks, that's all. Many perform direct I/O, many use sync in between, and many do additional reads or wait on a write (which gives the system more time to flush), and so on.

So I'm trying to narrow the use case down to a specific issue.

(Or in other words, if you see this in real-life application scenarios, it is a much more interesting bug.)

Comment 11 Amar Tumballi 2018-11-19 08:09:07 UTC
This is not being actively worked on right now; we should plan to address it as part of an overall I/O performance effort. The priority needs to be re-assessed.

Comment 12 Prasanna Kumar Kalever 2019-02-07 08:08:21 UTC
Do we still see this behavior on the latest OCS 3.11.1?
New performance-improvement patches for block workloads were recently added to core glusterfs, so I am wondering whether they fixed this.

@Nitin, could you please confirm?

