Bug 768299 - [cd24be007c64bd10d8c28e8e9b1c988478a32c8c] stress test with certain arguments fails when quota is enabled
Summary: [cd24be007c64bd10d8c28e8e9b1c988478a32c8c] stress test with certain arguments fails when quota is enabled
Keywords:
Status: CLOSED DUPLICATE of bug 801364
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: low
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-12-16 09:39 UTC by Rahul C S
Modified: 2015-12-01 16:45 UTC
CC List: 2 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2012-03-14 06:49:27 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments: none

Description Rahul C S 2011-12-16 09:39:37 UTC
Description of problem:
The test was run using stress version 1.0.4, which can be downloaded from:
http://weather.ou.edu/~apw/projects/stress/

Command used: 
stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 4 --verbose 
stress: info: [10396] dispatching hogs: 8 cpu, 4 io, 2 vm, 4 hdd
stress: dbug: [10396] using backoff sleep of 54000us
stress: dbug: [10396] --> hogcpu worker 8 [10397] forked
stress: dbug: [10396] --> hogio worker 4 [10398] forked
stress: dbug: [10396] --> hogvm worker 2 [10399] forked
stress: dbug: [10396] --> hoghdd worker 4 [10400] forked
stress: dbug: [10396] using backoff sleep of 42000us
stress: dbug: [10396] --> hogcpu worker 7 [10401] forked
stress: dbug: [10396] --> hogio worker 3 [10402] forked
stress: dbug: [10396] --> hogvm worker 1 [10403] forked
stress: dbug: [10396] --> hoghdd worker 3 [10404] forked
stress: dbug: [10396] using backoff sleep of 30000us
stress: dbug: [10396] --> hogcpu worker 6 [10405] forked
stress: dbug: [10396] --> hogio worker 2 [10406] forked
stress: dbug: [10396] --> hoghdd worker 2 [10407] forked
stress: dbug: [10396] using backoff sleep of 21000us
stress: dbug: [10396] --> hogcpu worker 5 [10408] forked
stress: dbug: [10396] --> hogio worker 1 [10409] forked
stress: dbug: [10396] --> hoghdd worker 1 [10410] forked
stress: dbug: [10396] using backoff sleep of 12000us
stress: dbug: [10396] --> hogcpu worker 4 [10411] forked
stress: dbug: [10396] using backoff sleep of 9000us
stress: dbug: [10396] --> hogcpu worker 3 [10412] forked
stress: dbug: [10396] using backoff sleep of 6000us
stress: dbug: [10396] --> hogcpu worker 2 [10413] forked
stress: dbug: [10396] using backoff sleep of 3000us
stress: dbug: [10396] --> hogcpu worker 1 [10414] forked
stress: dbug: [10410] seeding 1048575 byte buffer with random data
stress: dbug: [10407] seeding 1048575 byte buffer with random data
stress: dbug: [10403] allocating 134217728 bytes ...
stress: dbug: [10403] touching bytes in strides of 4096 bytes ...
stress: dbug: [10404] seeding 1048575 byte buffer with random data
stress: dbug: [10399] allocating 134217728 bytes ...
stress: dbug: [10399] touching bytes in strides of 4096 bytes ...
stress: dbug: [10400] seeding 1048575 byte buffer with random data
stress: dbug: [10407] opened ./stress.nC07Ld for writing 1073741824 bytes
stress: dbug: [10407] unlinking ./stress.nC07Ld
stress: dbug: [10404] opened ./stress.I9fCTa for writing 1073741824 bytes
stress: dbug: [10404] unlinking ./stress.I9fCTa
stress: dbug: [10410] opened ./stress.Oi1UXd for writing 1073741824 bytes
stress: dbug: [10410] unlinking ./stress.Oi1UXd
stress: dbug: [10400] opened ./stress.7JlaAa for writing 1073741824 bytes
stress: dbug: [10400] unlinking ./stress.7JlaAa
stress: dbug: [10407] fast writing to ./stress.nC07Ld
stress: dbug: [10399] freed 134217728 bytes
stress: dbug: [10399] allocating 134217728 bytes ...
stress: dbug: [10399] touching bytes in strides of 4096 bytes ...
stress: dbug: [10403] freed 134217728 bytes
stress: dbug: [10403] allocating 134217728 bytes ...
stress: dbug: [10403] touching bytes in strides of 4096 bytes ...
stress: dbug: [10404] fast writing to ./stress.I9fCTa
stress: FAIL: [10407] (581) write failed: Cannot allocate memory
stress: dbug: [10410] fast writing to ./stress.Oi1UXd
stress: FAIL: [10396] (394) <-- worker 10407 returned error 1
stress: WARN: [10396] (396) now reaping child worker processes
stress: dbug: [10396] <-- worker 10397 reaped
stress: dbug: [10396] <-- worker 10401 reaped
stress: dbug: [10396] <-- worker 10414 reaped
stress: dbug: [10396] <-- worker 10413 reaped
stress: dbug: [10396] <-- worker 10399 reaped
stress: dbug: [10396] <-- worker 10408 reaped
stress: dbug: [10396] <-- worker 10403 reaped
stress: dbug: [10396] <-- worker 10400 reaped
stress: dbug: [10396] <-- worker 10404 reaped
stress: dbug: [10396] <-- worker 10410 reaped
stress: dbug: [10396] <-- worker 10405 reaped
stress: dbug: [10396] <-- worker 10412 reaped
stress: dbug: [10396] <-- worker 10411 reaped
stress: dbug: [10396] <-- worker 10402 reaped
stress: dbug: [10396] <-- worker 10406 reaped
stress: dbug: [10396] <-- worker 10409 reaped
stress: dbug: [10396] <-- worker 10398 reaped
stress: FAIL: [10396] (451) failed run completed in 2s


The command fails when quota is enabled and the --hdd option is given.
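
For reference, each --hdd worker follows an open/unlink/write pattern (visible in the dbug lines above), so the large writes go to an inode whose directory entry has already been removed, which may be why quota cannot find a parent for the inode (see the quota_check_limit warning quoted below). A minimal shell sketch of that pattern, assuming a quota-enabled fuse mount at /mnt/fsxvol (hypothetical path):

# open a scratch file on the mount and keep fd 3, mimicking stress --hdd
exec 3> /mnt/fsxvol/repro.tmp
# unlink it while the fd is still open, as stress does
rm /mnt/fsxvol/repro.tmp
# write ~1GB through the unlinked inode; under quota this is presumably
# where the "Cannot allocate memory" failure surfaces
dd if=/dev/zero bs=1M count=1024 >&3
# close fd 3
exec 3>&-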

I tried running it under valgrind and did not find any significant leaks.
All I see in the client logs are a few warnings like these:
[2011-12-16 14:35:46.977139] W [quota.c:392:quota_check_limit] 1-fsxvol-quota: cannot find parent for inode (gfid:7d3d9cb0-053b-4c22-a227-980c7d0018dd), hence aborting enforcing quota-limits and continuing with the fop
[2011-12-16 14:35:46.977425] W [client3_1-fops.c:690:client3_1_writev_cbk] 1-fsxvol-client-0: remote operation failed: Cannot allocate memory
[2011-12-16 14:35:46.977674] W [fuse-bridge.c:1995:fuse_writev_cbk] 0-glusterfs-fuse: 56704: WRITE => -1 (Cannot allocate memory)
[2011-12-16 14:35:46.977877] W [fuse-bridge.c:1995:fuse_writev_cbk] 0-glusterfs-fuse: 56705: WRITE => -1 (Cannot allocate memory)

and a few server-side INFO messages like these:
[2011-12-16 14:43:23.893368] I [server3_1-fops.c:1238:server_writev_cbk] 0-fsxvol-server: 33: WRITEV 0 (e084b988-a0f9-494a-9fa0-8f9db3aff62e) ==> -1 (Cannot allocate memory)
[2011-12-16 14:43:23.895095] I [server3_1-fops.c:1238:server_writev_cbk] 0-fsxvol-server: 34: WRITEV 0 (e084b988-a0f9-494a-9fa0-8f9db3aff62e) ==> -1 (Cannot allocate memory)
[2011-12-16 14:43:23.897655] I [server3_1-fops.c:1238:server_writev_cbk] 0-fsxvol-server: 35: WRITEV 0 (e084b988-a0f9-494a-9fa0-8f9db3aff62e) ==> -1 (Cannot allocate memory)

That's why I have set the priority to low.

How reproducible:
Always

Steps to Reproduce:
1. create a volume with 1 brick
2. enable quota for the volume
3. mount via fuse and run stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 4 --verbose --timeout 60s
  
Actual results:
The test errors out after about 5s with a non-zero exit status.

Expected results:
The test should run for the full 60s and then time out cleanly.

Additional info:
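For convenience, here are the reproduction steps as one consolidated sequence. The host name, brick path and mount point are placeholders; the volume name fsxvol matches the one in the logs above:

# 1. create and start a single-brick volume (placeholder host/brick path)
gluster volume create fsxvol server1:/export/brick1
gluster volume start fsxvol

# 2. enable quota on the volume (optionally set a usage limit, e.g.
#    gluster volume quota fsxvol limit-usage / 10GB)
gluster volume quota fsxvol enable

# 3. fuse-mount the volume and run the stress command there
mount -t glusterfs server1:/fsxvol /mnt/fsxvol
cd /mnt/fsxvol
stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --hdd 4 --verbose --timeout 60s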

Comment 1 Amar Tumballi 2012-03-12 09:46:11 UTC
Please update these bugs with respect to 3.3.0qa27; this needs to be worked on as per the target milestone set.

Comment 2 Raghavendra G 2012-03-14 06:49:27 UTC

*** This bug has been marked as a duplicate of bug 795789 ***

Comment 3 Raghavendra G 2012-03-14 06:50:52 UTC

*** This bug has been marked as a duplicate of bug 801364 ***

