Bug 1024754 - [SVVP] SVVP cannot run on a max guest (256G RAM, 48 CPUs) on an AMD host; the guest is so slow it can barely respond to the HCK server
Keywords:
Status: CLOSED DUPLICATE of bug 1136803
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Radim Krčmář
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-30 11:22 UTC by Min Deng
Modified: 2015-01-27 14:45 UTC
18 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-01-27 14:45:24 UTC
Target Upstream Version:




Links
Red Hat Bugzilla 1069309 (Priority: None, Status: None, Last Updated: Never)

Internal Links: 1069309

Description Min Deng 2013-10-30 11:22:22 UTC
Description of problem:
SVVP tests cannot run on a max guest (256G RAM, 48 CPUs) on an AMD host.
Version-Release number of selected component (if applicable):
qemu-kvm-rhev 415
kernel 425&424
How reproducible:
4 times
Steps to Reproduce:
1. Start the guest in a loop:
N_REPEAT=1
while true; do
    date
    echo "test round: $N_REPEAT"
    N_REPEAT=$((N_REPEAT+1))
    /usr/libexec/qemu-kvm --nodefaults --nodefconfig -m 256G -smp 64 \
        -cpu Opteron_G3 -M rhel6.5.0 -usb -device usb-tablet,id=tablet0 \
        -drive file=win2012-AMD-MAX.raw,if=none,id=drive-virtio0-0-0,format=raw,werror=stop,rerror=stop,cache=none,serial=number \
        -device virtio-blk-pci,drive=drive-virtio0-0-0,id=virti0-0-0,bootindex=1 \
        -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup \
        -device virtio-net-pci,netdev=hostnet0,id=net0,mac=57:60:35:39:36:53 \
        -uuid 3852f9b8-cf84-4c76-9257-13f62faad916 -monitor stdio -vnc :1 \
        -vga cirrus -name win2012-AMD-MAX \
        -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 \
        -cdrom en_windows_server_2012_x64_dvd_915478.iso -boot menu=on \
        -device usb-ehci,id=ehci0 \
        -drive file=usb-storage-amd-max.raw,if=none,id=drive-usb-2-0,media=disk,format=raw,cache=none,werror=stop,rerror=stop,aio=threads \
        -device usb-storage,bus=ehci0.0,drive=drive-usb-2-0,id=usb-2-0,removable=on \
        -rtc base=localtime,clock=host,driftfix=slew \
        -chardev socket,id=111a,path=/tmp/monitor-win2012-amd-max,server,nowait \
        -mon chardev=111a,mode=readline
done
2. Submit a job to the HCK server.
3. Observe the job.

Actual results:
The job can barely execute because the guest is extremely slow; reducing the CPU count to 32 makes the guest responsive again.
Expected results:
The job executes normally.

Additional info:
AMD host with 512G of memory

Comment 2 RHEL Product and Program Management 2013-11-04 03:26:22 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 3 Qunfang Zhang 2013-11-06 05:12:18 UTC
Min,

What is the host cpu number?

Comment 4 Min Deng 2013-11-06 05:32:38 UTC
(In reply to Qunfang Zhang from comment #3)
> Min,
> 
> What is the host cpu number?
 
  Sorry, my mistake: the CLI above should specify 48 CPUs.
  The host CPU count is:
  # cat /proc/cpuinfo | grep "processor" | sort -u | wc -l
  48

Comment 5 Min Deng 2013-11-06 06:18:15 UTC
 This bug affects SVVP for the following reasons:
  1. The guest became very slow, making it hard to install the HCK client.
  2. Even after the HCK client was installed, resetting the guest from the HCK server put the guest into debug state, so SVVP testing could not continue.

Comment 6 Radim Krčmář 2014-06-17 11:56:34 UTC
Big guests on Intel are not affected?

Does the time spent in guest vary significantly between 32 and 48 VCPUs?  Or,
to streamline the process: can I get access to that host?
(If I understand it correctly, the guest is unusable for normal operations too,
 so HCK server should not be necessary.)

Thanks.

Comment 7 Qunfang Zhang 2014-06-18 03:08:09 UTC
Hi, Min

Could you help reply Radim's question in comment 6?   Thanks!

Comment 8 Min Deng 2014-06-18 06:20:14 UTC
(In reply to Radim Krčmář from comment #6)
> Big guests on Intel are not affected?
  Yes, big guests on Intel are not affected.
> 
> Does the time spent in guest vary significantly between 32 and 48 VCPUs? 
  Yes, the time spent in the guest varies significantly between 32 and 48 VCPUs.
> to streamline the process: can I get access to that host?
  Another team is using the host right now, so we have to wait. As soon as it is available I will notify you.
> (If I understand it correctly, the guest is unusable for normal operations
> too,
>  so HCK server should not be necessary.)
> 
> Thanks.

Comment 10 Radim Krčmář 2015-01-27 14:45:24 UTC
The bug most likely has the same source as bug 1136803 -- too many timer reads to do any useful work.  Windows reads the timer on every CPU at set intervals, and the OS can shorten this interval.  With many VCPUs and a short interval, little other work gets done because QEMU has to serialize the reads.
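The effect described above can be sketched with a back-of-the-envelope model. This is purely illustrative and not from the bug report: the per-read cost and read interval below are made-up assumptions, chosen only to show why the slowdown is non-linear in the VCPU count once serialized reads start to dominate the interval.

```python
# Illustrative model: every VCPU reads a timer once per interval, and the
# reads are serialized (only one can proceed at a time). The numbers used
# below are hypothetical, not measurements from this bug.
def useful_work_fraction(vcpus, read_interval_us, read_cost_us):
    """Return the fraction of each interval left for useful work after
    all VCPUs have taken their turn at the serialized timer read."""
    serialized_time = vcpus * read_cost_us   # total read time per interval
    return max(0.0, 1.0 - serialized_time / read_interval_us)

if __name__ == "__main__":
    # Hypothetical values: 3 us per serialized read, 200 us interval.
    for n in (32, 48):
        print(n, useful_work_fraction(n, read_interval_us=200, read_cost_us=3))
```

With these assumed numbers, 32 VCPUs still leave roughly half of each interval for useful work, while 48 VCPUs leave barely a quarter; past ~66 VCPUs the model bottoms out at zero, which is consistent with the reported cliff between 32 and 48 CPUs.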

Bug 1136803 has more information though, so closing this one.

*** This bug has been marked as a duplicate of bug 1136803 ***

