Description of problem:

Since Queens (perhaps Pike?), nova-compute's default reserved host memory (`reserved_host_memory_mb`) is 4096 MB. This doesn't work out well in low-memory environments:

~~~
(overcloud) [stack@undercloud-1 ~]$ grep 4096 -C10 /usr/share/openstack-tripleo-heat-templates/puppet/services/nova-compute.yaml
      processes. Ex. NovaVcpuPinSet: ['4-12','^8'] will reserve cores from 4-12 excluding 8
    type: comma_delimited_list
    default: []
    tags:
      - role_specific
  NovaReservedHostMemory:
    description: >
      Reserved RAM for host processes.
    type: number
    default: 4096
    constraints:
      - range: { min: 512 }
    tags:
      - role_specific
  MonitoringSubscriptionNovaCompute:
    default: 'overcloud-nova-compute'
    type: string
  NovaComputeLoggingSource:
    type: json
    default:
~~~

~~~
[root@overcloud-compute-0 ~]# grep reserv /var/lib/config-data/puppet-generated/nova_libvirt -R | egrep -v ':#'
/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf:reserved_host_memory_mb=4096
[root@overcloud-compute-0 ~]#
~~~

~~~
(overcloud) [stack@undercloud-1 ~]$ nova hypervisor-show d787b5c3-d83f-4246-9338-5083bfbb6058 | grep mb
| free_ram_mb                  | -1    |
| memory_mb                    | 4095  |
| memory_mb_used               | 4096  |
(overcloud) [stack@undercloud-1 ~]$
~~~

The templates provide a low-memory-usage.yaml environment file:
/usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml

Change that .yaml to include `NovaReservedHostMemory: 512`:

~~~
# Lower the memory usage of overcloud.
parameter_defaults:
  CinderWorkers: 1
  GlanceWorkers: 1
  HeatWorkers: 1
  KeystoneWorkers: 1
  NeutronWorkers: 1
  NovaWorkers: 1
  SaharaWorkers: 1
  SwiftWorkers: 1
  GnocchiMetricdWorkers: 1
  ApacheMaxRequestWorkers: 100
  ApacheServerLimit: 100

  ControllerExtraConfig:
    'nova::network::neutron::neutron_url_timeout': '60'

  DatabaseSyncTimeout: 900

  # Override defaults to get HEALTH_OK with 1 OSD (for testing only)
  CephPoolDefaultSize: 1
  CephPoolDefaultPgNum: 32

  NovaReservedHostMemory: 512
~~~
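The arithmetic behind the negative `free_ram_mb` above can be sketched as follows. The `free_ram_mb` helper here is hypothetical (not Nova's actual code); it only mirrors the idea that the reported free RAM is total host memory minus the reservation (plus any guest usage, zero here):

```python
# Hypothetical sketch of how the hypervisor's free RAM figure is derived,
# assuming free = memory_mb - reserved_host_memory_mb with no guests running.
def free_ram_mb(memory_mb: int, reserved_host_memory_mb: int) -> int:
    """RAM left for guests after subtracting the host reservation."""
    return memory_mb - reserved_host_memory_mb

# A 4095 MB host with the default 4096 MB reservation goes negative,
# matching the hypervisor-show output above:
print(free_ram_mb(4095, 4096))  # -1

# Lowering the reservation to 512 MB leaves usable memory:
print(free_ram_mb(4095, 512))   # 3583
```

With the default reservation larger than the host's total RAM, the scheduler sees no schedulable memory at all, which is why the low-memory environment file needs to override it.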
Assigning to Compute DFG for triage
There is a patch submitted upstream [1], which needs to be backported to OSP13.

[1] https://review.openstack.org/#/c/577938/
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3587