Description of problem:
Boot a rhel5.6.z guest, then savevm/loadvm, then run ntpdate -q clock.redhat.com; it shows an offset of 3413.441167 sec.

Version-Release number of selected component (if applicable):
[root@localhost ~]# rpm -qa | grep qemu-kvm
qemu-kvm-debuginfo-0.12.1.2-2.156.el6.x86_64
qemu-kvm-tools-0.12.1.2-2.156.el6.x86_64
qemu-kvm-0.12.1.2-2.156.el6.x86_64

host kernel:
[root@localhost ~]# uname -r
2.6.32-128.el6.x86_64

guest kernel: 2.6.18-238.9.1.el5

How reproducible:
100%

Steps to Reproduce:
1. Sync host time using ntpdate:
[root@localhost ~]# ntpdate -q clock.redhat.com
server 66.187.233.4, stratum 1, offset 0.026467, delay 0.30315
 7 Apr 15:15:02 ntpdate[24929]: adjust time server 66.187.233.4 offset 0.026467 sec
[root@localhost ~]# ntpdate -b clock.redhat.com
 7 Apr 15:15:12 ntpdate[24932]: step time server 66.187.233.4 offset 0.020460 sec
[root@localhost ~]# ntpdate -q clock.redhat.com
server 66.187.233.4, stratum 1, offset 0.001438, delay 0.29265
 7 Apr 15:15:17 ntpdate[24933]: adjust time server 66.187.233.4 offset 0.001438 sec

2.
Boot a rhel5.6.z guest:
/usr/libexec/qemu-kvm -M rhel6.1.0 -enable-kvm -m 6144 -smp 4 -name rhel5.6-64 \
  -uuid `uuidgen` -rtc base=utc,clock=host,driftfix=slew -no-kvm-pit-reinjection \
  -boot c \
  -drive file=/dev/chayang/RHEL5.6-64,if=none,id=drive-virtio-0-0,media=disk,format=qcow2,cache=none \
  -device virtio-blk-pci,drive=drive-virtio-0-0,id=virt0-0-0 \
  -netdev tap,id=hostnet1 \
  -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:40:81:11:53 \
  -usb -device usb-tablet,id=input1 -vnc :0 -monitor stdio -balloon none \
  -serial unix:/tmp/uni,server,nowait

guest grub entries:
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.18-238.9.1.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-238.9.1.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet crashkernel=128M@16M console=ttyS0,115200,console=tty0
        initrd /initrd-2.6.18-238.9.1.el5.img
title Red Hat Enterprise Linux Server (2.6.18-238.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-238.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet crashkernel=128M@16M clock=tsc
        initrd /initrd-2.6.18-238.el5.img

# dmesg | grep time.c
time.c: Using 1.193182 MHz WALL KVM GTOD KVM timer.
time.c: Detected 2293.834 MHz processor.

3. Sync time in the guest:
[root@localhost ~]# ntpdate -q clock.redhat.com
server 66.187.233.4, stratum 1, offset 1.508089, delay 0.30530
 7 Apr 08:17:59 ntpdate[3087]: step time server 66.187.233.4 offset 1.508089 sec
[root@localhost ~]# ntpdate -b clock.redhat.com
 7 Apr 08:18:23 ntpdate[3090]: step time server 66.187.233.4 offset 1.518579 sec
[root@localhost ~]# ntpdate -q clock.redhat.com
server 66.187.233.4, stratum 1, offset 0.003216, delay 0.26236
 7 Apr 08:18:29 ntpdate[3093]: adjust time server 66.187.233.4 offset 0.003216 sec

4.
savevm/loadvm

Actual results:
After step 4, ntpdate in the guest reports:
[root@localhost ~]# ntpdate -q clock.redhat.com
server 66.187.233.4, stratum 1, offset 3413.441167, delay 0.29216
 7 Apr 08:30:44 ntpdate[3318]: step time server 66.187.233.4 offset 3413.441167 sec

Expected results:
There should be no time drift.

Additional info:
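The drift in the steps above can be checked mechanically by pulling the offset field out of ntpdate's query output and comparing it before and after loadvm. A minimal sketch; the `offset_of` helper is illustrative (not part of the report), and the sample line is copied from the post-loadvm query above:

```shell
# Hypothetical helper: extract the "offset" field (seconds) from one line of
# `ntpdate -q` output, so pre- and post-loadvm offsets can be compared.
offset_of() {
    printf '%s\n' "$1" | sed -n 's/.*offset \([0-9.+-]*\),.*/\1/p'
}

# Sample line copied from the report (the query run after loadvm):
sample='server 66.187.233.4, stratum 1, offset 3413.441167, delay 0.29216'
offset_of "$sample"   # prints 3413.441167
```

In a real run, the same helper would be applied to live `ntpdate -q clock.redhat.com` output captured before savevm and after loadvm.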
Since RHEL 6.1 External Beta has begun and this bug remains unresolved, it has been rejected, as it was not proposed as an exception or blocker. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.
A Windows 2003 64-bit guest and a RHEL 6.0.z 32-bit guest have the same issue.

windows2003:
/usr/libexec/qemu-kvm -m 4G -smp 4,sockets=2,cores=1,threads=2 -name tt -uuid `uuidgen` \
  -boot dc -drive file=/dev/vg0/win1,if=none,id=drive-virtio0-0-0,format=qcow2 \
  -device virtio-blk-pci,drive=drive-virtio0-0-0,id=ide0-0-0 \
  -net nic,macaddr=21:40:50:12:23:21,vlan=0,model=virtio \
  -net tap,script=/etc/qemu-ifup,vlan=0 -vnc :2 \
  -rtc base=localtime,clock=host,driftfix=slew -no-kvm-pit-reinjection \
  -monitor stdio -balloon virtio

rhel6.0.z:
/usr/libexec/qemu-kvm -m 4G -smp 4,sockets=2,cores=1,threads=2 -name tt -uuid `uuidgen` \
  -boot dc -drive file=/dev/vg0/rhel2,if=none,id=drive-virtio0-0-0,format=qcow2 \
  -device virtio-blk-pci,drive=drive-virtio0-0-0,id=ide0-0-0 \
  -net nic,macaddr=20:40:50:12:23:21,vlan=0,model=virtio \
  -net tap,script=/etc/qemu-ifup,vlan=0 -vnc :1 \
  -rtc base=utc,clock=host,driftfix=slew -no-kvm-pit-reinjection \
  -monitor stdio -balloon virtio
On Windows 7 32-bit SP1, after savevm the cursor jumps around quite quickly, and running ntpdate -q clock.redhat.com returns 0.000000 (will attach a screenshot). After loadvm, the cursor appears normal again after a while.

/usr/libexec/qemu-kvm -M rhel6.1.0 -enable-kvm -m 2048 -smp 2 -name windows \
  -uuid `uuidgen` -rtc base=localtime,clock=host,driftfix=slew -boot c \
  -drive file=/dev/chayang/win7-32,if=none,id=drive-virtio-0-0,media=disk,format=qcow2,cache=none \
  -device virtio-blk-pci,drive=drive-virtio-0-0,id=virt0-0-0 \
  -netdev tap,id=hostnet1 \
  -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:40:e1:a1:13 \
  -usb -device usb-tablet,id=input1 -spice port=8000,disable-ticketing \
  -monitor stdio -balloon none

Additional info: on my AMD host:
# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
tsc
Created attachment 490703 [details] savevm then loadvm
You told the guest to use the TSC for its clock, then stopped it, then restarted it, which should have restored the TSC but allowed real time to elapse.

There is 100% going to be TSC drift in this scenario; that is not a bug. However, the drift should not be 3400 seconds - unless you stopped the guest for nearly an hour before restarting it. Can you please confirm how long elapsed between the savevm and the loadvm?

Any Windows issue here is probably separate and should be filed in a separate BZ until we can confirm the issue is the same. Windows does not normally use the TSC for tracking time, so the mechanism may be entirely different.
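The reasoning above amounts to a back-of-the-envelope check: if the TSC-derived guest time is restored verbatim at loadvm, the guest clock lags by roughly the wall-clock interval the VM spent stopped. A sketch with hypothetical timestamps (the epoch values below are made up to line up with the observed ~3413 s offset):

```shell
# Hypothetical wall-clock times (epoch seconds) around the save/load cycle.
save_epoch=1302180000   # made-up time of savevm
load_epoch=1302183413   # made-up time of loadvm, roughly 57 minutes later

# With a TSC-based guest clock restored as-is, the expected drift is
# approximately the time the VM spent stopped.
expected_drift=$((load_epoch - save_epoch))
echo "expected drift: ${expected_drift} s"   # prints: expected drift: 3413 s
```

This is why the ~3413 s offset would only be unsurprising if nearly an hour passed between savevm and loadvm.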
(In reply to comment #6)
> You told the guest to use TSC for clock, then stopped it, then restarted it,
> which should have restored the TSC but allowed real time to elapse.
>
> There is 100% going to be a TSC drift in this scenario, that is not a bug.
> However, the time drift should not be 3400 seconds - unless you stopped the
> guest for nearly an hour before restarting it. Can you please confirm how long
> between the savevm and restorevm?
>
> Any Windows issue here is probably separate and should be filed in a separate
> BZ until we can confirm the issue is the same. Windows does not normally use
> TSC for tracking time and so the manner of action may be entirely different.

Hi Zachary, sorry for the late response. Will update results before the end of today.
(In reply to comment #6)
> You told the guest to use TSC for clock, then stopped it, then restarted it,
> which should have restored the TSC but allowed real time to elapse.

The guest is using kvmclock.

> There is 100% going to be a TSC drift in this scenario, that is not a bug.
> However, the time drift should not be 3400 seconds - unless you stopped the
> guest for nearly an hour before restarting it. Can you please confirm how long
> between the savevm and restorevm?

The time drift is 179.786277 sec when using kvmclock as the clock source, and almost 5 minutes elapsed between savevm and loadvm. Is that the expected behavior? Will try the tsc clocksource as well as an x86_64 guest and update here soon.

> Any Windows issue here is probably separate and should be filed in a separate
> BZ until we can confirm the issue is the same. Windows does not normally use
> TSC for tracking time and so the manner of action may be entirely different.

Will file a new bug to track the Windows issue.
(In reply to comment #9)
> (In reply to comment #6)
> > You told the guest to use TSC for clock, then stopped it, then restarted it,
> > which should have restored the TSC but allowed real time to elapse.
> The guest is using kvmclock.

I misread the boot parameters and thought you were loading the kernel from the second grub entry, which has clock=tsc - please confirm that's not the case. If so, it looks like there is a real bug here.

> > There is 100% going to be a TSC drift in this scenario, that is not a bug.
> > However, the time drift should not be 3400 seconds - unless you stopped the
> > guest for nearly an hour before restarting it. Can you please confirm how long
> > between the savevm and restorevm?
> Time drift is 179.786277 when using kvmclock as clock source, and almost 5
> minutes elapsed between savevm and loadvm. Is it behaving as expected? Will try
> tsc clocksouce as well as a x86_64 guest and update here soon.

That's not expected.

> > Any Windows issue here is probably separate and should be filed in a separate
> > BZ until we can confirm the issue is the same. Windows does not normally use
> > TSC for tracking time and so the manner of action may be entirely different.
> Will file a new bug to trace windows issue.

Thanks - can you cc me on any time-related bugs you file?
(In reply to comment #10)
> I misread the boot parameters and thought you were loading the kernel with the
> second grub listing, which has clock=tsc - please confirm that's not the case.
>
> If so, looks like there is a real bug here.

Since this is an x86 RHEL5 guest, I think it is more reliable to check the current clock source by reading /sys/devices/system/clocksource/clocksource0/current_clocksource than by grepping dmesg, so I am sure the guest is using kvm-clock.

Guest info:
# uname -a
Linux localhost.localdomain 2.6.18-238.10.1.el5 #1 SMP Fri Apr 15 06:41:46 EDT 2011 i686 i686 i386 GNU/Linux
# cat /sys/devices/system/clocksource/clocksource0/current_clocksource
kvm-clock
# dmesg | grep -i time.c
Real Time Clock Driver v1.12ac

> > Time drift is 179.786277 when using kvmclock as clock source, and almost 5
> > minutes elapsed between savevm and loadvm. Is it behaving as expected? Will try
> > tsc clocksouce as well as a x86_64 guest and update here soon.

1. Tried the tsc clocksource in an i686 RHEL5 guest: time drift is 218.022716 sec after loadvm; 9 minutes elapsed between savevm and loadvm.
2. Tried the kvm-clock clocksource in an x86_64 RHEL5 guest (2.6.18-238.5.1.el5): time drift is 75.288650 sec after loadvm; 3 minutes elapsed between savevm and loadvm.
3. Tried the PIT/TSC clocksource in an x86_64 RHEL5 guest (2.6.18-238.5.1.el5): time drift is 519.430574 sec after loadvm; 10 minutes elapsed between savevm and loadvm.
# dmesg | grep -i time.c
time.c: Using 1.193182 MHz WALL PIT GTOD PIT/TSC timer.
time.c: Detected 2660.252 MHz processor.
Real Time Clock Driver v1.12ac

> > Will file a new bug to trace windows issue.
>
> Thanks, can you cc me on any time related bugs you file?

Sure.
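One way to compare the three measurements above is drift per minute of savevm-to-loadvm interval. A rough calculation using the numbers from this comment; the per-minute framing is an editorial aid, not something the reporter computed:

```shell
# Drift (sec) divided by the savevm->loadvm interval (min), per configuration.
awk 'BEGIN {
    printf "i686   tsc      : %.1f s/min\n", 218.022716 / 9
    printf "x86_64 kvm-clock: %.1f s/min\n", 75.288650  / 3
    printf "x86_64 PIT/TSC  : %.1f s/min\n", 519.430574 / 10
}'
```

The i686/tsc and x86_64/kvm-clock rates come out nearly identical (about 24-25 s/min), which is consistent with the later observation that kvm-clock is failing to compensate just as a raw TSC clocksource would.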
It's expected for a TSC-based clocksource, but not for kvmclock. The kvmclock update should automatically compensate for this and keep the guest clock up to date with real time.
Someone with lab access should take this... Ulrich, would you mind taking a look? If your plate is too full, feel free to reassign to default owner.
(In reply to comment #14)
> It's expected for TSC based clocksource, not for KVM clock. The kvmclock
> update should automatically compensate for this and keep the guest clock up to
> date with real time.

It's not related to kvmclock, because Windows suffers from it too. The problem is probably due to the RTC CMOS data. In that case we should close it as 'not a bug'.