Description of problem:
Backport of bug https://bugs.launchpad.net/nova/+bug/1473308 to RHOSP8.
Fix (Mitaka): https://review.openstack.org/#/c/200630/

Version-Release number of selected component (if applicable):
nova-scheduler: 12.0.4.4

Steps to Reproduce:
1. Create a VM with the NUMATopologyFilter enabled (configured via host aggregate and flavor).
2. If the first compute host does not match the requested hugepage size, an exception is raised and the operation is aborted (no host available).

Actual results:
An exception is raised and the operation is aborted (no host available).

Expected results:
The scheduler should continue to the next host.

Additional info:
The fix has been available for Mitaka since March; it is a very small change in the code.
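The shape of the upstream fix can be sketched as follows. This is a simplified, hypothetical illustration (the class, helper, and exception names here are stand-ins, not the exact nova source): instead of letting the page-size exception escape the filter and abort scheduling entirely, the filter catches it and reports the host as not passing, so the scheduler moves on to the next candidate.

```python
class MemoryPageSizeNotSupported(Exception):
    """Stand-in for nova.exception.MemoryPageSizeNotSupported."""


def fit_instance_to_host(host_page_size_kb, requested_page_size_kb):
    # Simplified stand-in for the NUMA fitting logic: raises when the
    # host cannot provide the page size the flavor requested.
    if host_page_size_kb != requested_page_size_kb:
        raise MemoryPageSizeNotSupported(
            "page size %d KiB not available" % requested_page_size_kb)
    return True


class NUMATopologyFilter(object):
    """Sketch of the filter's host_passes() behavior after the fix."""

    def host_passes(self, host_page_size_kb, requested_page_size_kb):
        try:
            return fit_instance_to_host(host_page_size_kb,
                                        requested_page_size_kb)
        except MemoryPageSizeNotSupported:
            # Before the fix, this exception propagated out of the
            # filter and aborted the whole scheduling attempt
            # ("no host available"); with the fix, the host simply
            # fails the filter and the next host is tried.
            return False


f = NUMATopologyFilter()
# Request 1 GiB pages: a 2 MiB host fails, a 1 GiB host passes.
print([f.host_passes(size_kb, 1048576) for size_kb in (2048, 1048576)])
```

With this change, a host whose hugepage size does not match is filtered out rather than failing the request outright, which is exactly the expected behavior described above.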
node 2 gets selected when node 1 does not satisfy the hugepages criteria

# yum list installed | grep openstack-nova
openstack-nova-api.noarch          1:12.0.4-16.el7ost   @rhelosp-8.0-puddle
openstack-nova-cert.noarch         1:12.0.4-16.el7ost   @rhelosp-8.0-puddle
openstack-nova-common.noarch       1:12.0.4-16.el7ost   @rhelosp-8.0-puddle
openstack-nova-compute.noarch      1:12.0.4-16.el7ost   @rhelosp-8.0-puddle
openstack-nova-conductor.noarch    1:12.0.4-16.el7ost   @rhelosp-8.0-puddle
openstack-nova-console.noarch      1:12.0.4-16.el7ost   @rhelosp-8.0-puddle
openstack-nova-novncproxy.noarch   1:12.0.4-16.el7ost   @rhelosp-8.0-puddle
openstack-nova-scheduler.noarch    1:12.0.4-16.el7ost   @rhelosp-8.0-puddle

node1
=====
# cat /proc/meminfo | grep Huge
AnonHugePages:    157696 kB
HugePages_Total:     256
HugePages_Free:      256
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

node2
=====
# cat /proc/meminfo | grep Huge
AnonHugePages:   1468416 kB
HugePages_Total:       4
HugePages_Free:        2
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB

[root@seal27 ~(keystone_admin)]#
# nova flavor-show 100
+----------------------------+---------------------------------------------------------------+
| Property                   | Value                                                         |
+----------------------------+---------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                         |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                             |
| disk                       | 5                                                             |
| extra_specs                | {"hw:cpu_policy": "dedicated", "hw:mem_page_size": "1048576"} |
| id                         | 100                                                           |
| name                       | m1.pinned                                                     |
| os-flavor-access:is_public | True                                                          |
| ram                        | 2048                                                          |
| rxtx_factor                | 1.0                                                           |
| swap                       |                                                               |
| vcpus                      | 1                                                             |
+----------------------------+---------------------------------------------------------------+

# nova list
+--------------------------------------+------+--------+------------+-------------+---------------------+
| ID                                   | Name | Status | Task State | Power State | Networks            |
+--------------------------------------+------+--------+------------+-------------+---------------------+
| b4345499-9c68-48d8-9149-dc276775ef7a | vm1  | ACTIVE | -          | Running     | public=172.24.4.228 |
+--------------------------------------+------+--------+------------+-------------+---------------------+

# nova show vm1 | grep host
| OS-EXT-SRV-ATTR:host                 | node2                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | node2                                                    |
| hostId                               | 92570924d2576283fd550dbb94c707509298cbfe9f62c6eb8cf2cabc |
#
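The verification above can be checked mechanically: only the node whose `Hugepagesize` matches the flavor's `hw:mem_page_size` (1048576 KiB) should be eligible. A minimal sketch, using the meminfo values copied from the output above (the `MEMINFO` dict and helper function are illustrative, not part of nova):

```python
# Hugepagesize lines taken from the /proc/meminfo output pasted above.
MEMINFO = {
    "node1": "HugePages_Total: 256\nHugepagesize: 2048 kB",
    "node2": "HugePages_Total: 4\nHugepagesize: 1048576 kB",
}


def hugepage_size_kb(meminfo_text):
    # Extract the default hugepage size (in KiB) from /proc/meminfo text.
    for line in meminfo_text.splitlines():
        if line.startswith("Hugepagesize:"):
            return int(line.split()[1])
    return None


requested_kb = 1048576  # flavor extra spec hw:mem_page_size
eligible = [node for node, text in sorted(MEMINFO.items())
            if hugepage_size_kb(text) == requested_kb]
print(eligible)  # only node2 offers 1 GiB pages
```

This matches the `nova show vm1` output: the instance lands on node2, while node1 (2 MiB pages) is correctly filtered out instead of aborting the scheduling attempt.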
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-2713.html