Description of problem:
Running the KVM block regression test with an NFS backend (1 Gb link) on a RHEL 6.2 GA host versus a RHEL 6.3 beta1 host shows a ~35% performance regression in large-file sequential writes.

Detailed result:
================================================================
raw format, LargeFile Creates (256KB), w/ kernel-257 guest
                        | threads |   IOPS | Thro (MBps)
rhel6.2GA               |       1 | 195.35 |        48.8
r63 kernel-267 qemu-272 |       1 | 126.29 |        31.6
diff %                  |         |  -35.4 |       -35.2

qcow2 format, LargeFile Creates (256KB), w/ kernel-257 guest
                        | threads |   IOPS | Thro (MBps)
rhel6.2GA               |       1 | 199.77 |        49.9
r63 kernel-267 qemu-272 |       1 | 130.61 |        32.7
diff %                  |         |  -34.6 |       -34.5
================================================================

Version-Release number of selected component (if applicable):
(host)  kernel-2.6.32-262, qemu-kvm-0.12.1.2-257
(guest) kernel-2.6.32-257

How reproducible:
always

Steps to Reproduce:
1. Mount the 1 Gb NetApp NFS export on the client (a mount invocation matching these options is sketched at the end of this comment). cat /proc/mounts shows:
   192.168.0.113:/vol/s2wquan116171nfs /home/kvm_autotest_root/images nfs rw,sync,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.113,mountvers=3,mountport=4046,mountproto=tcp,local_lock=none,addr=192.168.0.113 0 0
2. Create the raw/qcow2 test images:
   qemu-img create -f raw /home/kvm_autotest_root/images/storage2.raw 20G
   qemu-img create -f qcow2 /home/kvm_autotest_root/images/storage2.qcow2 110G -o preallocation=metadata
3. Pass the raw/qcow2 image to the KVM guest as a block device, using cache=none and aio=threads:
   /usr/libexec/qemu-kvm -name vm1 -nodefaults -vga std -chardev socket,id=qmp_monitor_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20120418-021037-GUK9,server,nowait -mon chardev=qmp_monitor_id_qmpmonitor1,mode=control -chardev socket,id=serial_id_20120418-021037-GUK9,path=/tmp/serial-20120418-021037-GUK9,server,nowait -device isa-serial,chardev=serial_id_20120418-021037-GUK9 -device ich9-usb-uhci1,id=usb1,bus=pci.0,addr=0x4 -drive file=/usr/local/autotest/tests/kvm/images/RHEL-Server-6.3-64-virtio.raw,index=0,if=none,id=drive-virtio-disk1,media=disk,cache=none,boot=off,snapshot=off,readonly=off,format=raw,aio=threads -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -drive file=/usr/local/autotest/tests/kvm/images/test.raw,index=2,if=none,id=drive-virtio-disk2,media=disk,cache=none,boot=off,snapshot=off,readonly=off,format=raw,aio=threads -device virtio-blk-pci,bus=pci.0,addr=0x6,drive=drive-virtio-disk2,id=virtio-disk2 -device virtio-net-pci,netdev=idExu1N2,mac=9a:ff:15:58:09:ce,id=ndev00idExu1N2,bus=pci.0,addr=0x3 -netdev tap,id=idExu1N2,vhost=on -m 4096 -smp 2,cores=1,threads=1,sockets=2 -cpu Westmere -device usb-tablet,id=usb-tablet1,bus=usb1.0 -vnc :1 -vga qxl -rtc base=utc,clock=host,driftfix=slew -M rhel6.3.0 -boot order=cdn,once=c,menu=off -no-kvm-pit-reinjection -enable-kvm
4. In the KVM guest (running RHEL6.3-257), files are written to the second disk (/dev/vdb), so format that virtual disk with ext4 and mount it in the guest:
   mkfs.ext4 /dev/vdb; mount /dev/vdb /mnt
5. Run the ffsb test; direct I/O is enabled in the ffsb configuration:
   # ffsb large_file_creates_256k_1.ffsb

Actual results:

Expected results:

Additional info:
This regression may be caused by the removal of the request-linearization hack in QEMU (bug #767606).
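For reference, the /proc/mounts line in step 1 corresponds roughly to the mount invocation below. This is a sketch, not the exact command the autotest setup runs: the export, mount point, and options are copied from the /proc/mounts output above, while the remaining fields shown there (mountaddr, local_lock, etc.) are reported by the kernel rather than passed on the command line.

   mount -t nfs -o rw,sync,hard,noac,proto=tcp,vers=3,rsize=65536,wsize=65536,timeo=600,retrans=2 \
       192.168.0.113:/vol/s2wquan116171nfs /home/kvm_autotest_root/images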
Created attachment 578965 [details]
ffsb large_file_creates_256k_1.profile
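For readers who do not have the attachment handy: an FFSB profile for this kind of run typically looks roughly like the sketch below. The option names follow FFSB's shipped example profiles, and the concrete values (run time, file count, file sizes) are assumptions that may differ from the attached large_file_creates_256k_1 profile. The essential points are directio=1 (step 5 in comment #0), a single thread, a create-only workload, and a 256 KB write block size.

   time=300                    # run length in seconds (assumed)
   directio=1                  # direct I/O on, as required by the test
   alignio=1

   [filesystem0]
           location=/mnt               # guest mount point from step 4
           num_files=1024              # assumed
           min_filesize=1073741824     # assumed 1 GB files (large-file workload)
           max_filesize=1073741824
   [end0]

   [threadgroup0]
           num_threads=1               # matches the "threads" column in the result table
           create_weight=1             # pure file-create (sequential write) workload
           write_size=1073741824       # assumed
           write_blocksize=262144      # 256 KB writes, per the test name
   [end0]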
Hi Wenli, thanks for the report.

Can you please test the qemu-kvm version prior to the bug #767606 fix? We need to determine which component/release introduced the regression, so please help us bisect it.

Also, why didn't you move bug #767606 back to ASSIGNED if you are sure it is the root cause, as noted in https://bugzilla.redhat.com/show_bug.cgi?id=767606#c11?
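In case it helps, swapping in the pre-fix qemu-kvm build for the comparison could be roughly along these lines (a sketch; the package file names are placeholders, not taken from this report):

   # check which build is currently installed
   rpm -q qemu-kvm qemu-img

   # install the older build (the one prior to the bug #767606 fix), keeping the
   # host/guest kernels unchanged, then restart the guest so it runs on the
   # downgraded binary
   rpm -Uvh --oldpackage qemu-kvm-<pre-fix-build>.x86_64.rpm qemu-img-<pre-fix-build>.x86_64.rpm

   # rerun the same ffsb profile in the guest and compare against comment #0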
(In reply to comment #3)
> Can you please test the qemu-kvm version prior to the bug #767606 fix? We need
> to determine which component/release introduced the regression, so please help
> us bisect it.

Since the qemu-kvm -237 build, which predates the bug #767606 fix, had been deleted from brew, I rebuilt the qemu-kvm-237 packages in https://brewweb.devel.redhat.com/taskinfo?taskID=4326145. Running the same test after switching qemu-kvm back to -237 (same host/guest kernels) gives about 49 MB/s for 256 KB sequential writes on the raw format, which is essentially the same as the RHEL 6.2 result in comment #0.

> Also, why didn't you move bug #767606 back to ASSIGNED if you are sure it is
> the root cause, as noted in https://bugzilla.redhat.com/show_bug.cgi?id=767606#c11?

Yes, I should have moved it back to ASSIGNED immediately rather than waiting for the customer's reply.
From Paolo's last update on bug 767606 (c#16): "Kernel bug reopened as bug 815265, moved QE tracker to bug 814617, using that one to post the revert."
Verified with kernel 2.6.32-269.el6.x86_64 and qemu-kvm-0.12.1.2-2.287.el6, using the same steps as in comment #0. The FFSB result for large-file sequential writes (256 KB) is 46.9 MB/s, so there is no regression compared with the RHEL 6.2 host. Based on this, moving the bug to VERIFIED.
Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
No Documentation Needed
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2012-0746.html