Bug 1329544

Summary: Hammer doesn't respond while 100 concurrent host registrations are in progress
Product: Red Hat Satellite
Component: Registration
Version: 6.2.0
Status: CLOSED DUPLICATE
Severity: medium
Priority: medium
Reporter: Pradeep Kumar Surisetty <psuriset>
Assignee: satellite6-bugs <satellite6-bugs>
QA Contact: Katello QA List <katello-qa-list>
CC: bbuckingham, cduryee, xdmoon
Keywords: Performance, Triaged
Target Milestone: Unspecified
Target Release: Unused
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2016-09-20 18:29:15 UTC
Bug Blocks: 1115190

Description Pradeep Kumar Surisetty 2016-04-22 08:12:10 UTC
Description of problem:


1) Install satellite 6.2 
2) Install capsule 
3) Register 100 concurrent nodes to capsule/satellite and run hammer command 

[root@satserver ~]# hammer -u admin -p changeme host list
Error: 503 Service Unavailable
[root@satserver ~]# passenger-status 
Version : 4.0.18
Date    : 2016-04-22 03:58:43 -0400
Instance: 20405
----------- General information -----------
Max pool size : 6
Processes     : 6
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
  App root: /usr/share/foreman
  Requests in queue: 95
  * PID: 24970   Sessions: 1       Processed: 2141    Uptime: 1h 22m 32s
    CPU: 7%      Memory  : 595M    Last used: 1s ago
  * PID: 26639   Sessions: 1       Processed: 2143    Uptime: 1h 7m 48s
    CPU: 8%      Memory  : 553M    Last used: 0s ago
  * PID: 26668   Sessions: 1       Processed: 2075    Uptime: 1h 7m 48s
    CPU: 10%     Memory  : 776M    Last used: 1s ago
  * PID: 4939    Sessions: 1       Processed: 210     Uptime: 12m 46s
    CPU: 5%      Memory  : 550M    Last used: 0s ago
  * PID: 5239    Sessions: 1       Processed: 173     Uptime: 7m 45s
    CPU: 8%      Memory  : 553M    Last used: 0s ago

/etc/puppet/rack#default:
  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 23065   Sessions: 0       Processed: 20      Uptime: 1h 44m 22s
    CPU: 0%      Memory  : 50M     Last used: 14m 23s 



The Passenger pool is at its maximum size and the request queue is full, as shown above, so the hammer request fails with a 503.
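A back-of-the-envelope check (my own arithmetic, not from the bug, and it assumes each in-flight registration holds one Passenger session) shows why the hammer call gets a 503 with the default pool:

```python
# Rough capacity arithmetic for the passenger-status output above.
# Assumption: each concurrent registration occupies one Passenger session.
pool_size = 6             # "Max pool size : 6" (Passenger default)
concurrent = 100          # registrations in progress

busy = min(concurrent, pool_size)   # sessions actually being served
queued = concurrent - busy          # requests waiting in the queue

print(busy, queued)                 # 6 94
```

With ~94 registration requests already queued, an extra `hammer host list` call lands at the back of the queue (the status output shows 95 queued) and times out instead of being served.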

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:

"hammer -u admin -p changeme host list" fails
Expected results:


"hammer -u admin -p changeme host list"  should list hosts

Additional info:

Comment 1 Pradeep Kumar Surisetty 2016-04-22 08:14:54 UTC
This was noticed in 6.1 too. There is a workaround.

The most important out-of-the-box tunable to adjust is PassengerMaxPoolSize. It should be set to 1.5 × the physical CPU cores available to the Satellite 6.1 server. This caps the total number of processes available to both Foreman and Puppet on Satellite 6.1 and Capsules. PassengerMaxInstancesPerApp can be used to prevent one application from consuming all available Passenger processes.


My Satellite server has 16 cores, so PassengerMaxPoolSize is set to 1.5 × 16 = 24. With this, about 100 concurrent registrations succeed.
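The sizing rule above can be sketched as a quick calculation (a sketch only; note that `os.cpu_count()` reports logical cores, so on hyperthreaded hardware you would halve it to get physical cores first):

```python
import os

def recommended_pool_size(physical_cores: int) -> int:
    """PassengerMaxPoolSize = 1.5 x physical CPU cores, the rule used in this comment."""
    return int(physical_cores * 1.5)

# The reporter's server: 16 physical cores -> pool of 24
print(recommended_pool_size(16))    # 24

# On a live host, os.cpu_count() gives logical cores as a starting point.
logical = os.cpu_count()
```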

Global Passenger configuration: /etc/httpd/conf.d/passenger.conf

LoadModule passenger_module modules/mod_passenger.so
<IfModule mod_passenger.c>
   PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-4.0.18/lib/phusion_passenger/locations.ini
   PassengerRuby /usr/bin/ruby
   PassengerMaxPoolSize 24
   PassengerMaxRequestQueueSize 200
   PassengerStatThrottleRate 120
</IfModule>



Foreman Passenger application configuration: /etc/httpd/conf.d/05-foreman-ssl.conf
  PassengerAppRoot /usr/share/foreman
  PassengerRuby /usr/bin/ruby193-ruby
  PassengerMinInstances 6
  PassengerStartTimeout 90
  PassengerMaxPreloaderIdleTime 0
  PassengerMaxRequests 10000
  PassengerPreStart https://example.com

Puppet Passenger application configuration: /etc/httpd/conf.d/25-puppet.conf
  PassengerMinInstances 6
  PassengerStartTimeout 90
  PassengerMaxPreloaderIdleTime 0
  PassengerMaxRequests 10000
  PassengerPreStart https://example.com:8140

Comment 3 Chris Duryee 2016-09-20 18:29:15 UTC

*** This bug has been marked as a duplicate of bug 1163452 ***