Bug 666477 - test XFS/FUSE heavy writeback workloads (also swapping too)
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.6
Hardware: Unspecified
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Red Hat Kernel Manager
QA Contact: Boris Ranto
Keywords: TestOnly
Depends On:
Blocks:
Reported: 2010-12-30 20:48 EST by CAI Qian
Modified: 2011-01-17 21:22 EST
CC: 2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2011-01-17 21:22:34 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description CAI Qian 2010-12-30 20:48:16 EST
Description of problem:
This is a test-only item from the VMM test plan's integration testing section.
https://wiki.test.redhat.com/Kernel/IEEE-VMM#integration

# XFS heavy writeback workloads (plus swapping)
# FUSE heavy writeback workloads 
# writeback/dirty_* tunables
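The dirty_* tunables in the last item live under /proc/sys/vm; a read-only sketch of the main knobs (the roles in the comments come from the kernel sysctl docs, and the example values at the end are illustrative, not taken from this bug):

```shell
# Writeback tunables to exercise; all four exist on the RHEL 5 (2.6.18) kernel.
cat /proc/sys/vm/dirty_ratio                # % of RAM dirty before writers block
cat /proc/sys/vm/dirty_background_ratio     # % dirty before background writeback starts
cat /proc/sys/vm/dirty_expire_centisecs     # age at which dirty data must be flushed
cat /proc/sys/vm/dirty_writeback_centisecs  # how often the flusher threads wake up

# To tighten them for the test (illustrative values, requires root):
# echo 5  > /proc/sys/vm/dirty_background_ratio
# echo 10 > /proc/sys/vm/dirty_ratio
```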
Comment 1 CAI Qian 2010-12-30 21:01:08 EST
This could be something like:
https://bugzilla.redhat.com/show_bug.cgi?id=617035
Comment 3 Boris Ranto 2011-01-05 13:55:52 EST
Did you have any specific device in mind?
What did you mean by 'writeback'? An HDD with the write-back cache turned on?
What kind of swapping? Should I limit system memory so that swap is needed, or some other kind of swapping?
What kind of tunables? Mount options, or something under /sys/ or /proc/sys/?
Comment 4 CAI Qian 2011-01-05 22:23:10 EST
What I have in mind is:

1) start heavy XFS/FUSE workloads with multiple aggressive writers (better on a
   fast storage array), so that plenty of memory is occupied by file cache.
   Keep the workloads running...
2) at the same time, put the system under memory pressure using something like
   "memhog", so it starts to swap - "free" shows that used swap is non-zero.
   This forces the system to reclaim the file cache, as you can see in
   /proc/meminfo where Active(file) and Inactive(file) start to decrease. Keep
   up this pressure for a few days.
3) release the memory pressure, so the file cache grows back.
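The three phases above could be sketched as a script like the following. This is my sketch, not a script from the bug: the $MNT default, the writer count, the 15g memhog size (sized against the 16 GB boxes used later in this bug), and the dry-run guard are all assumptions; memhog ships in the numactl package.

```shell
#!/bin/sh
# Sketch of the three-phase procedure from comment 4.
# Defaults to a dry run that only prints the commands it would start.
MNT=${MNT:-/MP1}
WRITERS=${WRITERS:-4}
DRYRUN=${DRYRUN:-1}

run() {
    if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@" & fi
}

# Phase 1: several aggressive sequential writers fill the file cache.
i=1
while [ "$i" -le "$WRITERS" ]; do
    run dd if=/dev/zero of="$MNT/writer$i" bs=1M count=4096
    i=$((i + 1))
done

# Phase 2: anonymous-memory pressure so "free" shows non-zero swap used.
run memhog 15g

# Phase 3 is manual: kill memhog after a few days and watch the
# Active/Inactive figures in /proc/meminfo recover as file cache grows back.
[ "$DRYRUN" = 1 ] || wait
```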

Additionally, the reproducer for BZ#617035 is a good regression test for a
corner case.

There are also some good test cases upstream for this.
http://lkml.org/lkml/2010/9/3/502
http://lkml.org/lkml/2010/8/30/226
Comment 5 Boris Ranto 2011-01-06 10:20:53 EST
Based on my testing, I've filed bug 667707.
Comment 6 yanfu,wang 2011-01-07 04:31:26 EST
For the FUSE part, I tested using the procedure in comment #4 and the server works normally.
[root@ibm-hs21-01 fs_mark]# uname -a
Linux ibm-hs21-01.rhts.eng.nay.redhat.com 2.6.18-232.el5 #1 SMP Mon Nov 15 16:01:45 EST 2010 x86_64 x86_64 x86_64 GNU/Linux
[root@ibm-hs21-01 fs_mark]# mount
...
sshfs#localhost:/mnt/gbfs on /MP1 type fuse (rw,nosuid,nodev,max_read=65536)
[root@ibm-hs21-01 ~]# cat /proc/meminfo 
MemTotal:     16439360 kB
MemFree:      15879776 kB
Buffers:         64192 kB
Cached:         285080 kB
SwapCached:          0 kB
Active:         200728 kB
Inactive:       271460 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:     16439360 kB
LowFree:      15879776 kB
SwapTotal:    18481144 kB
SwapFree:     18481144 kB
Dirty:               8 kB
Writeback:           0 kB
AnonPages:      122952 kB
Mapped:          23680 kB
Slab:            38740 kB
PageTables:       8572 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:  26700824 kB
Committed_AS:   266244 kB
VmallocTotal: 34359738367 kB
VmallocUsed:    273904 kB
VmallocChunk: 34359462763 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 kB
[root@ibm-hs21-01 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         16054        546      15507          0         62        278
-/+ buffers/cache:        205      15848
Swap:        18047          0      18047
[root@ibm-hs21-01 ~]# cat /proc/swaps 
Filename				Type		Size	Used	Priority
/dev/mapper/VolGroup00-LogVol01         partition	18481144	0	-1

Run dd under memory pressure:
[root@ibm-hs21-01 MP1]# while true;do memhog 15g; done
[root@ibm-hs21-01 MP1]# dd if=/dev/zero of=test
or
[root@ibm-hs21-01 MP1]# for i in $(seq 1 10); do dd if=/dev/zero of=test$i bs=4096 count=2M; done

"free" shows that used swap is non-zero, and /proc/meminfo shows that Active and Inactive start to decrease:
[root@ibm-hs21-01 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         16054       6652       9401          0          0        889
-/+ buffers/cache:       5761      10292
Swap:        18047        143      17904
[root@ibm-hs21-01 ~]# cat /proc/meminfo |grep ctive
Active:       15387268 kB
Inactive:       822796 kB
[root@ibm-hs21-01 ~]# cat /proc/meminfo |grep ctive
Active:        8511224 kB
Inactive:       927640 kB
[root@ibm-hs21-01 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:         16054       9867       6186          0          0        470
-/+ buffers/cache:       9396       6657
Swap:        18047        160      17887
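For the multi-day pressure run, a small helper like this can be left logging (the function name and the one-minute interval are my choices, not from the bug). Note that the 2.6.18 RHEL 5 kernel reports only the aggregate Active/Inactive lines in /proc/meminfo, not an Active(file)/Inactive(file) split:

```shell
# One snapshot of the numbers comment 4 says to watch:
# LRU sizes, dirty/writeback pages, and swap in use.
sample_memory() {
    date
    grep -E '^(Active|Inactive|Dirty|Writeback):' /proc/meminfo
    free -m | awk '/^Swap:/ {print "swap used:", $3, "MB"}'
}

# e.g. log a sample every minute while the pressure runs:
# while sleep 60; do sample_memory; done >> /tmp/mem.log
```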


Then I ran fs_mark under memory pressure for a long time; the results look normal, as above:
[root@ibm-hs21-01 fs_mark]# ./fs_mark  -d  /MP1  -D  256  -n  100000  -t  4  -s  20480  -F  -S  0  -l fill.log

I can't use memhog on i686, where it fails with "Function not implemented", but testing with the fs_mark benchmark and dd for a long time, the server works normally.
[root@hp-dl360g6-02 ~]# memhog 5000m
mbind: Function not implemented
get_mempolicy: Function not implemented
[root@hp-dl360g6-02 ~]# memhog 8g membind
Kernel doesn't support NUMA policy
