Bug 1401445 - heat resource-list to use more than 1 CPU
Summary: heat resource-list to use more than 1 CPU
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-heat
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Zane Bitter
QA Contact: Amit Ugol
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-12-05 09:46 UTC by Ronnie Rasouli
Modified: 2016-12-05 16:19 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-05 16:19:44 UTC
Target Upstream Version:


Attachments

Description Ronnie Rasouli 2016-12-05 09:46:07 UTC
Description of problem:
heat resource-list -n 50 overcloud 
can be very slow; while it runs, we noticed that it never uses more than 1 CPU (the pidstat output below shows a single heat-engine process pinned at 100% of one core).

pidstat 1 -C 'heat-engine'
Linux 3.10.0-512.el7.x86_64 (undercloud-0.redhat.local)   12/04/2016   _x86_64_   (8 CPU)

01:44:23 PM   UID       PID    %usr %system  %guest    %CPU   CPU  Command
01:44:24 PM   187     30195    0.99    0.00    0.00    0.99     6  heat-engine
01:44:24 PM   187     30229    0.99    0.00    0.00    0.99     7  heat-engine
01:44:24 PM   187     30234  100.00    0.00    0.00  100.00     3  heat-engine

01:44:24 PM   UID       PID    %usr %system  %guest    %CPU   CPU  Command
01:44:25 PM   187     30195    1.00    0.00    0.00    1.00     4  heat-engine
01:44:25 PM   187     30233    1.00    0.00    0.00    1.00     2  heat-engine
01:44:25 PM   187     30234  100.00    1.00    0.00  100.00     3  heat-engine

01:44:25 PM   UID       PID    %usr %system  %guest    %CPU   CPU  Command
01:44:26 PM   187     30195    1.00    0.00    0.00    1.00     7  heat-engine
01:44:26 PM   187     30231    1.00    0.00    0.00    1.00     1  heat-engine
01:44:26 PM   187     30234  100.00    0.00    0.00  100.00     3  heat-engine

01:44:26 PM   UID       PID    %usr %system  %guest    %CPU   CPU  Command
01:44:27 PM   187     30195    1.00    1.00    0.00    2.00     5  heat-engine
01:44:27 PM   187     30232    1.00    0.00    0.00    1.00     0  heat-engine
01:44:27 PM   187     30234  100.00    0.00    0.00  100.00     3  heat-engine

01:44:27 PM   UID       PID    %usr %system  %guest    %CPU   CPU  Command
01:44:28 PM   187     30195    1.00    0.00    0.00    1.00     7  heat-engine
01:44:28 PM   187     30234  100.00    0.00    0.00  100.00     3  heat-engine

01:44:28 PM   UID       PID    %usr %system  %guest    %CPU   CPU  Command
01:44:29 PM   187     30195    1.00    0.00    0.00    1.00     5  heat-engine
01:44:29 PM   187     30234   99.00    0.00    0.00   99.00     3  heat-engine
01:44:29 PM   187     30235    1.00    0.00    0.00    1.00     6  heat-engine

01:44:29 PM   UID       PID    %usr %system  %guest    %CPU   CPU  Command
01:44:30 PM   187     30195    1.00    0.00    0.00    1.00     6  heat-engine
01:44:30 PM   187     30227    1.00    0.00    0.00    1.00     5  heat-engine
01:44:30 PM   187     30229    1.00    0.00    0.00    1.00     5  heat-engine
01:44:30 PM   187     30234  100.00    0.00    0.00  100.00     3  heat-engine



Version-Release number of selected component (if applicable):
openstack-heat-engine-7.0.0-7.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. On the undercloud, run: heat resource-list -n 50 overcloud
2. While the command runs, monitor CPU usage with: pidstat 1 -C 'heat-engine'
3. Observe that a single heat-engine process is pinned at 100% of one CPU while the other workers stay idle.

Actual results:
heat resource-list performance is very slow.

The number of available CPUs makes no difference: heat-engine caps out a single CPU even though more than one heat-engine worker is available to take the load.

Expected results:
Better heat performance by taking advantage of multiple cores / hyper-threading.

Additional info:
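The number of heat-engine worker processes is controlled by the num_engine_workers option in heat.conf. A minimal sketch for checking what is configured, assuming the standard /etc/heat/heat.conf path (illustrative only, not part of the reproduction):

# check_workers.py - print the configured heat-engine worker count (assumed path)
import configparser

cp = configparser.ConfigParser(interpolation=None)
cp.read('/etc/heat/heat.conf')
# num_engine_workers lives in [DEFAULT]; if it is not set, heat picks its own default
workers = cp.get('DEFAULT', 'num_engine_workers', fallback='(not set)')
print('num_engine_workers =', workers)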

Comment 1 Thomas Hervé 2016-12-05 09:59:55 UTC
I don't think there is anything to do here. We worked on the performance issue of resource-list, which should be acceptable now. Using several threads isn't relevant.

Comment 2 Zane Bitter 2016-12-05 16:19:44 UTC
Python performance is not improved by multithreading.
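The point here is CPython's global interpreter lock (GIL): CPU-bound Python code cannot run in parallel threads, so a single request handled by one heat-engine worker saturates at most one core. A minimal, self-contained sketch of the effect (illustrative only, unrelated to the heat code itself):

# gil_demo.py - CPU-bound work does not get faster with threads in CPython
import threading
import time

def busy(n=10000000):
    # pure-Python loop; holds the GIL while it runs
    total = 0
    for i in range(n):
        total += i
    return total

# run the work twice sequentially
start = time.time()
busy()
busy()
print('sequential: %.2fs' % (time.time() - start))

# run the same total work in two threads; the GIL serializes the bytecode,
# so the wall-clock time is roughly the same despite multiple idle CPUs
start = time.time()
threads = [threading.Thread(target=busy) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print('threaded:   %.2fs' % (time.time() - start))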

