Bug 1073763
| Summary: | network.compression fails simple '--ioengine=sync' fio test | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | josh <josh> |
| Component: | compression-xlator | Assignee: | bugs <bugs> |
| Status: | CLOSED EOL | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.5.0 | CC: | bugs |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Cloned To: | 1174016 (view as bug list) | | |
| Last Closed: | 2016-06-17 15:56:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1174016 | | |
| Bug Blocks: | | | |
Description
josh@wrale.com
2014-03-07 06:20:41 UTC
I just ran the same two tests on my HDD bricks (the first two tests were on SSD bricks) and obtained the same result. Volumes ending in -n002 have compression enabled, while volumes ending in -n001 do not:
```
[root@core-n1 hdd-vol-benchmark-n002]# fio --size=20g --bs=64k --rw=write --ioengine=sync --name=fio.write.out.1
fio.write.out.1: (g=0): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=sync, iodepth=1
fio-2.0.13
Starting 1 process
fio.write.out.1: Laying out IO file(s) (1 file(s) / 20480MB)
fio: pid=28809, err=5/file:engines/sync.c:67, func=xfer, error=Input/output error
fio.write.out.1: (groupid=0, jobs=1): err= 5 (file:engines/sync.c:67, func=xfer, error=Input/output error): pid=28809: Fri Mar 7 01:40:19 2014
  write: io=262144 B, bw=32000KB/s, iops=625 , runt= 8msec
    clat (usec): min=40 , max=1037 , avg=441.50, stdev=462.21
     lat (usec): min=43 , max=1041 , avg=444.75, stdev=462.63
    clat percentiles (usec):
     |  1.00th=[   40],  5.00th=[   40], 10.00th=[   40], 20.00th=[   40],
     | 30.00th=[  114], 40.00th=[  114], 50.00th=[  114], 60.00th=[  572],
     | 70.00th=[  572], 80.00th=[ 1032], 90.00th=[ 1032], 95.00th=[ 1032],
     | 99.00th=[ 1032], 99.50th=[ 1032], 99.90th=[ 1032], 99.95th=[ 1032],
     | 99.99th=[ 1032]
    lat (usec) : 50=20.00%, 250=20.00%, 750=20.00%
    lat (msec) : 2=20.00%
  cpu          : usr=0.00%, sys=0.00%, ctx=8, majf=0, minf=47
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=16.7%, 4=83.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=5/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
  WRITE: io=256KB, aggrb=32000KB/s, minb=32000KB/s, maxb=32000KB/s, mint=8msec, maxt=8msec
[root@core-n1 hdd-vol-benchmark-n002]#
```
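
For anyone trying to reproduce this, the sequence below is a minimal sketch of the setup described above: enable the compression translator on a volume, mount it on a client, and run the same fio job. The volume name, server hostname, and mount point are hypothetical; only the `network.compression` option and the fio invocation come from this report.

```sh
# Minimal reproduction sketch (assumed names: volume "hdd-vol-n002",
# server "core-n1", mount point "/mnt/hdd-vol-benchmark-n002").

# Enable the network compression (CDC) translator on the volume.
gluster volume set hdd-vol-n002 network.compression on

# Remount the volume so the client picks up the changed volume graph.
umount /mnt/hdd-vol-benchmark-n002
mount -t glusterfs core-n1:/hdd-vol-n002 /mnt/hdd-vol-benchmark-n002

# Run the same fio job as in the report; with compression enabled it
# fails almost immediately with EIO (err=5) instead of completing.
cd /mnt/hdd-vol-benchmark-n002
fio --size=20g --bs=64k --rw=write --ioengine=sync --name=fio.write.out.1
```

On an otherwise identical volume without network.compression enabled (the volumes ending in -n001 above), the same fio job completes without error.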
Bug 1174016 has been filed to get this fixed in the mainline version. When patches become available, we can backport them to the release-3.5 branch.

This bug is being closed because the 3.5 release is marked End-Of-Life. There will be no further updates to this version. If you are still facing this issue in a more current release, please open a new bug against a version that still receives bugfixes.