Bug 1416843 - metadata_workers controls number of nova-api processes
Summary: metadata_workers controls number of nova-api processes
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Diana Clarke
QA Contact: Prasanth Anbalagan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-26 15:11 UTC by Andreas Karis
Modified: 2020-03-11 15:39 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-27 18:53:11 UTC
Target Upstream Version:



Description Andreas Karis 2017-01-26 15:11:29 UTC
I played around a bit more with the parameters in /etc/nova/nova.conf, and the parameter which determines the number of nova-api processes seems to be metadata_workers. Shouldn't this be osapi_compute_workers?
~~~
[root@undercloud-1 ~]# grep 'workers=' /etc/nova/nova.conf 
#osapi_compute_workers=<None>
osapi_compute_workers=4
#metadata_workers=<None>
metadata_workers=2
#workers=<None>
workers=8
~~~

~~~
[root@undercloud-1 ~]# systemctl restart openstack-nova-api
[root@undercloud-1 ~]# ps aux | grep nova-api
nova     20400 61.8  0.6 368232 98996 ?        Ss   09:56   0:03 /usr/bin/python2 /usr/bin/nova-api
nova     20457  8.3  0.6 374852 101972 ?       S    09:56   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     20458  8.3  0.6 374852 101976 ?       S    09:56   0:00 /usr/bin/python2 /usr/bin/nova-api
root     20490  0.0  0.0 112648   964 pts/0    R+   09:56   0:00 grep --color=auto nova-api
~~~

Changing metadata_workers to 13 leads to 13+1 processes:
~~~
[root@undercloud-1 ~]# grep 'workers=' /etc/nova/nova.conf 
#osapi_compute_workers=<None>
osapi_compute_workers=4
#metadata_workers=<None>
metadata_workers=13
#workers=<None>
workers=8
[root@undercloud-1 ~]# systemctl restart openstack-nova-api
[root@undercloud-1 ~]# ps aux | grep nova-api
nova     22052 62.6  0.6 368232 99000 ?        Ss   09:58   0:03 /usr/bin/python2 /usr/bin/nova-api
nova     22099 13.0  0.6 374852 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22100 13.0  0.6 374852 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22101 13.0  0.6 374852 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22102 13.0  0.6 374852 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22103 13.0  0.6 374852 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22104 13.0  0.6 374852 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22105 13.0  0.6 374960 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22106 13.0  0.6 374864 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22107 12.5  0.6 374864 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22108 12.5  0.6 374964 101972 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22109 13.0  0.6 374864 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22110 12.5  0.6 374864 101968 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
nova     22111 12.5  0.6 374964 101972 ?       S    09:58   0:00 /usr/bin/python2 /usr/bin/nova-api
root     22136  0.0  0.0 112652   964 pts/0    S+   09:58   0:00 grep --color=auto nova-api
~~~

Comment 2 Diana Clarke 2017-01-27 18:53:11 UTC
(In reply to Andreas Karis from comment #0)
> I played around a bit more with the parameters in /etc/nova/nova.conf, and
> the parameter which determines the number of nova-api processes seems to be
> metadata_workers. Shouldn't this be osapi_compute_workers?

Both configuration options (metadata_workers, osapi_compute_workers) determine the number of nova-api processes.

A new nova-api process is created for each worker in each of the `enabled_apis`.

 - https://github.com/openstack/nova/blob/d7b5be1e3c1c0e5c6ae429173967aea55c2f0609/nova/cmd/api.py#L56

  for each enabled api:
      for each worker:
          create nova-api process

The `enabled_apis` configuration option defaults to ['osapi_compute', 'metadata'].

 - https://github.com/openstack/nova/blob/b69e68a7bee7a749a501db213d54f4dbe42da3d5/nova/conf/service.py#L83

When nova-api processes are created for the 'metadata' service, the `metadata_workers` configuration option is used.

When nova-api processes are created for the 'osapi_compute' service, the `osapi_compute_workers` configuration option is used.

 - https://github.com/openstack/nova/blob/f9d7b383a7cb12b6cd3e6117daf69b08620bf40f/nova/service.py#L319
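
To make that concrete, here is a minimal Python sketch of the logic above (hypothetical names, with a plain dict standing in for the config; the real implementation is in the api.py and service.py files linked here):

~~~
# Hypothetical sketch of the launch loop, not the actual nova code.
WORKER_OPTION = {
    'osapi_compute': 'osapi_compute_workers',
    'metadata': 'metadata_workers',
}

def expected_process_count(conf):
    """Parent nova-api process plus one worker per enabled API's workers option."""
    total = 1  # the parent process
    for api in conf['enabled_apis']:
        total += conf[WORKER_OPTION[api]]
    return total

# With the default enabled_apis and the worker values from comment #0,
# this predicts 1 + 4 + 2 = 7 processes, not the 3 observed there,
# which is exactly why the enabled_apis value matters.
conf = {'enabled_apis': ['osapi_compute', 'metadata'],
        'osapi_compute_workers': 4,
        'metadata_workers': 2}
print(expected_process_count(conf))  # 7
~~~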

In the scenarios you described above, what was `enabled_apis` set to in your `/etc/nova/nova.conf`?

The following knowledge base article you mentioned in the customer portal case will need to be updated. In particular, this part: "the parameter which determines the number of nova-api processes is actually metadata_workers".

 - https://access.redhat.com/solutions/2890171

I'm going to mark this bug as closed, but I'd still like to see your nova conf file and the `enabled_apis` setting, just in case I've missed something.

Hope that helps!

Cheers,

--diana

Comment 3 Andreas Karis 2017-01-30 15:00:34 UTC
~~~
[root@undercloud-1 ~]# cat /etc/nova/nova.conf  | egrep -v '^#|^$'
[DEFAULT]
auth_strategy=keystone
use_forwarded_for=False
fping_path=/usr/sbin/fping
rootwrap_config=/etc/nova/rootwrap.conf
allow_resize_to_same_host=False
reserved_host_memory_mb=0
ram_allocation_ratio=1.0
sync_power_state_interval=-1
heal_instance_info_cache_interval=60
force_config_drive=True
default_floating_pool=nova
dhcp_domain=
use_neutron=True
notify_on_state_change=vm_and_task_state
notify_api_faults=False
state_path=/var/lib/nova
scheduler_host_subset_size=1
scheduler_use_baremetal_filters=False
scheduler_available_filters=tripleo_common.filters.list.tripleo_filters
scheduler_default_filters=RetryFilter,TripleOCapabilitiesFilter,ComputeCapabilitiesFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
scheduler_weight_classes=nova.scheduler.weights.all_weighers
scheduler_host_manager=ironic_host_manager
scheduler_driver=filter_scheduler
max_io_ops_per_host=8
max_instances_per_host=50
scheduler_max_attempts=30
report_interval=10
enabled_apis=metadata
osapi_compute_listen=192.0.2.1
osapi_compute_listen_port=8774
osapi_compute_workers=2
metadata_listen=192.0.2.1
metadata_listen_port=8775
metadata_workers=2
compute_manager=ironic.nova.compute.manager.ClusteredComputeManager
service_down_time=60
compute_driver=ironic.IronicDriver
vif_plugging_is_fatal=True
vif_plugging_timeout=300
firewall_driver=nova.virt.firewall.NoopFirewallDriver
force_raw_images=True
debug=True
log_dir=/var/log/nova
rpc_response_timeout=600
transport_url=rabbit://ae0b47e8f11dc525a7dd76d49a9a2f10a7fe5a10:8c19a867359618220d3eb7626ab25876f2d24a58@192.0.2.1//
rpc_backend=rabbit
image_service=nova.image.glance.GlanceImageService
osapi_volume_listen=192.0.2.1
[api_database]
connection=mysql+pymysql://nova_api:c244d7a89b15672e23ab78e1ab36c9db88bcf946@192.0.2.1/nova_api
[barbican]
[cache]
[cells]
[cinder]
catalog_info=volumev2:cinderv2:publicURL
[cloudpipe]
[conductor]
workers=2
[cors]
[cors.subdomain]
[crypto]
[database]
connection=mysql+pymysql://nova:c244d7a89b15672e23ab78e1ab36c9db88bcf946@192.0.2.1/nova
[ephemeral_storage_encryption]
[glance]
api_servers=http://192.0.2.1:9292
[guestfs]
[hyperv]
[image_file_url]
[ironic]
api_endpoint=http://192.0.2.1:6385/v1
admin_username=ironic
admin_password=099d9727bbb666717369f6a331ade525c0a74c06
admin_url=http://192.0.2.1:35357/v2.0
admin_tenant_name=service
[key_manager]
[keystone_authtoken]
auth_uri=http://192.0.2.1:5000/v3
auth_type=password
username=nova
project_name=service
auth_url=http://192.0.2.1:35357
password=c244d7a89b15672e23ab78e1ab36c9db88bcf946
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url=http://192.0.2.1:9696
region_name=
ovs_bridge=br-int
extension_sync_interval=600
service_metadata_proxy=False
timeout=30
auth_type=v3password
auth_url=http://192.0.2.1:5000/v3
project_name=service
project_domain_name=Default
username=neutron
user_domain_name=Default
password=bdfe7aa78eeb38c49d68872d891bc84f8c6d6c77
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
driver=messaging
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
enable_proxy_headers_parsing=False
[oslo_policy]
policy_file=/etc/nova/policy.json
[placement]
[placement_database]
[rdp]
[remote_debug]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vmware]
[vnc]
enabled=False
[workarounds]
[wsgi]
api_paste_config=api-paste.ini
[xenserver]
[xvp]
[root@undercloud-1 ~]#
~~~

Comment 4 Andreas Karis 2017-01-30 15:16:22 UTC
The above configuration is from Director, which means that only the metadata_workers parameter will have an impact.

Controller nodes have:
~~~
[root@overcloud-controller-0 ~]# grep enabled_api /etc/nova/nova.conf 
enabled_apis=osapi_compute,metadata
~~~

So just to make that clear, the general formula is:
~~~
1 + osapi_compute_workers * ( 1 if "osapi_compute" in enabled_apis else 0) + metadata_workers * ( 1 if "metadata" in enabled_apis else 0)
~~~

With:
~~~
>>> osapi_compute_workers
4
>>> metadata_workers
3
>>> enabled_apis=[ "osapi_compute", "metadata" ]
>>> 1 + osapi_compute_workers * ( 1 if "osapi_compute" in enabled_apis else 0) + metadata_workers * ( 1 if "metadata" in enabled_apis else 0)
8
>>> enabled_apis=[  "metadata" ]
>>> 1 + osapi_compute_workers * ( 1 if "osapi_compute" in enabled_apis else 0) + metadata_workers * ( 1 if "metadata" in enabled_apis else 0)
4
>>> enabled_apis=[ "osapi_compute"]
>>> 1 + osapi_compute_workers * ( 1 if "osapi_compute" in enabled_apis else 0) + metadata_workers * ( 1 if "metadata" in enabled_apis else 0)
5
~~~
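
The same formula as a reusable helper (a hypothetical function name, handy for predicting a node's expected process count before checking `ps`):

~~~
>>> def nova_api_process_count(enabled_apis, osapi_compute_workers, metadata_workers):
...     return (1 + osapi_compute_workers * ( 1 if "osapi_compute" in enabled_apis else 0)
...               + metadata_workers * ( 1 if "metadata" in enabled_apis else 0))
...
>>> nova_api_process_count(["osapi_compute", "metadata"], 4, 3)
8
>>> nova_api_process_count(["metadata"], 4, 3)
4
~~~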

Comment 5 Diana Clarke 2017-01-30 15:47:47 UTC
(In reply to Andreas Karis from comment #3)
> [root@undercloud-1 ~]# cat /etc/nova/nova.conf  | egrep -v '^#|^$'
> [DEFAULT]
> enabled_apis=metadata

Thanks for providing the nova.conf. That value for enabled_apis lines up with the number of nova-api processes you mentioned in the bug description scenarios, so I'm confident that this is working as expected.

Have a great week!

Comment 6 Andreas Karis 2017-01-30 16:08:41 UTC
Hi,

Thanks for the detailed explanation, it's greatly appreciated.  I updated the KCS with it.

Regards,

Andreas

Comment 7 awaugama 2017-08-30 17:55:23 UTC
WONTFIX/NOTABUG, therefore QE won't automate.

