Description of problem:
=======================
Time sync does not work for an instance that is resumed from the SUSPENDED
state. Please refer to the following comment on Bug 1040531 for detailed
information:
https://bugzilla.redhat.com/show_bug.cgi?id=1040531#c31

Version-Release number of selected component (if applicable):
=============================================================
openstack-nova-api.noarch          1:14.0.1-1.el7ost   @RH7-RHOS-10.0
openstack-nova-cert.noarch         1:14.0.1-1.el7ost   @RH7-RHOS-10.0
openstack-nova-common.noarch       1:14.0.1-1.el7ost   @RH7-RHOS-10.0
openstack-nova-compute.noarch      1:14.0.1-1.el7ost   @RH7-RHOS-10.0
openstack-nova-conductor.noarch    1:14.0.1-1.el7ost   @RH7-RHOS-10.0
openstack-nova-console.noarch      1:14.0.1-1.el7ost   @RH7-RHOS-10.0
openstack-nova-novncproxy.noarch   1:14.0.1-1.el7ost   @RH7-RHOS-10.0
openstack-nova-scheduler.noarch    1:14.0.1-1.el7ost   @RH7-RHOS-10.0

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. nova suspend vm
2. nova resume vm
3. date +%s; virsh domtime <domain>

Actual results:
===============
The two timestamps printed in step 3 differ.

Expected results:
=================
The two timestamps printed in step 3 are identical.

Additional info:
================
Note that time sync works for the PAUSE/UNPAUSE scenario but not for
SUSPEND/RESUME. A sketch of a manual resync via the guest agent follows.
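Until the fix lands, the guest clock can be re-synchronized by hand through
the QEMU guest agent, which virsh exposes as "domtime --now". A minimal
sketch of that manual check and resync, assuming the guest image runs
qemu-guest-agent (as in the reproduction above); the domain name here is
hypothetical:

# Hypothetical domain name; substitute the instance's libvirt domain.
DOM=instance-00000007

# Compare the host wall clock with the guest clock (both in epoch seconds).
date +%s; virsh domtime "$DOM"

# Push the host's current time into the guest via the QEMU guest agent.
virsh domtime "$DOM" --now

# The two clocks should now agree.
date +%s; virsh domtime "$DOM"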
Verified as follows:

# yum list installed | grep openstack-nova
openstack-nova-api.noarch          1:14.0.2-5.el7ost   @rhelosp-10.0-puddle
openstack-nova-cert.noarch         1:14.0.2-5.el7ost   @rhelosp-10.0-puddle
openstack-nova-common.noarch       1:14.0.2-5.el7ost   @rhelosp-10.0-puddle
openstack-nova-compute.noarch      1:14.0.2-5.el7ost   @rhelosp-10.0-puddle
openstack-nova-conductor.noarch    1:14.0.2-5.el7ost   @rhelosp-10.0-puddle
openstack-nova-console.noarch      1:14.0.2-5.el7ost   @rhelosp-10.0-puddle
openstack-nova-novncproxy.noarch   1:14.0.2-5.el7ost   @rhelosp-10.0-puddle
openstack-nova-scheduler.noarch    1:14.0.2-5.el7ost   @rhelosp-10.0-puddle

# glance image-show 3d561057-fa87-4e14-9eea-dd917571ecb4
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| checksum            | d13ad524d0276b5eb6dc8a0f75eaae3a     |
| container_format    | bare                                 |
| created_at          | 2016-11-17T02:01:57Z                 |
| disk_format         | qcow2                                |
| hw_qemu_guest_agent | yes                                  |
| id                  | 3d561057-fa87-4e14-9eea-dd917571ecb4 |
| min_disk            | 0                                    |
| min_ram             | 0                                    |
| name                | rhel                                 |
| owner               | 941f4e77be0947fab6795e97dcc84675     |
| protected           | False                                |
| size                | 525860864                            |
| status              | active                               |
| tags                | []                                   |
| updated_at          | 2016-11-17T02:02:41Z                 |
| virtual_size        | None                                 |
| visibility          | private                              |
+---------------------+--------------------------------------+

# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID                                   | Name | Status | Task State | Power State | Networks            |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| d01ae2ff-67d8-42d2-b6e9-4e06eca59a92 | vm1  | ACTIVE | -          | Running     | public=172.24.4.229 |
+--------------------------------------+------+--------+------------+-------------+---------------------+

# nova pause vm1
# nova unpause vm1

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 8     instance-00000007              running

# date +%s; virsh domtime 8
1479349573
Time: 1479349573

# nova suspend vm1
# nova resume vm1

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 9     instance-00000007              running

# date +%s; virsh domtime 9
1479349878
Time: 1479349878
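For regression testing, the manual comparison above can be scripted. A
minimal sketch (not part of the original verification), assuming the guest
runs qemu-guest-agent and the libvirt domain name or ID is passed as the
first argument; the script name is hypothetical:

#!/bin/bash
# check_domtime.sh (hypothetical helper): compare host and guest clocks.
# Usage: ./check_domtime.sh <domain-name-or-id>
DOM="$1"

HOST_TIME=$(date +%s)
# "virsh domtime" prints "Time: <epoch>"; keep the second field.
GUEST_TIME=$(virsh domtime "$DOM" | awk '{print $2}')

# Allow 1 second of skew, since the two reads are not atomic.
DRIFT=$(( HOST_TIME - GUEST_TIME ))
if [ "${DRIFT#-}" -le 1 ]; then
    echo "OK: guest clock within 1s of host (drift ${DRIFT}s)"
else
    echo "FAIL: guest clock drifted by ${DRIFT}s"
fi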
Since the problem described in this bug report should be resolved in a recent
advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow
the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-2948.html