Bug 1297691 - LDAP authentication gets stuck and Passenger requests-in-queue maxed, making web UI inaccessible ("This website is under heavy load") [NEEDINFO]
Status: CLOSED DUPLICATE of bug 1277085
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Users & Roles
Version: Unspecified
Hardware: Unspecified
Priority: high
Severity: high
Target Milestone: GA
Target Release: --
Assigned To: Daniel Lobato Garcia
QA Contact: Katello QA List
Keywords: Triaged
Depends On:
Reported: 2016-01-12 04:00 EST by Stefan Nemeth
Modified: 2017-04-25 12:12 EDT
CC: 5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2016-04-04 02:31:48 EDT
Type: Bug
Regression: ---
ktordeur: needinfo? (satellite6-bugs)

Attachments: None
Description Stefan Nemeth 2016-01-12 04:00:27 EST
Description of problem:

When a user successfully logs in to the Satellite web UI as an AD/LDAP user,
the Passenger request queue fills up to its maximum, no matter how large the
queue limit is configured.

The UI then returns the message:
"This website is under heavy load"

With wrong credentials the user is simply rejected and everything is OK.

On Satellite 6.1.4 everything works as expected.
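The behaviour described above is consistent with the authentication call blocking indefinitely: every Passenger worker that picks up an LDAP login never returns, so each successful login permanently pins one worker. A minimal Ruby sketch of the difference between an unbounded bind and one wrapped in a timeout (the method names here are hypothetical, for illustration only; the real fix shipped in ldap_fluff, per comment 9):

```ruby
require 'timeout'

# Simulated LDAP bind that never returns, as described in this bug:
# the authentication call blocks forever, pinning the Passenger worker.
def hung_ldap_bind
  sleep # blocks indefinitely
end

# A bounded variant: wrapping the bind in a timeout means a stuck
# directory connection cannot hold a worker forever.
def bind_with_timeout(seconds)
  Timeout.timeout(seconds) { hung_ldap_bind }
  :ok
rescue Timeout::Error
  :timed_out
end

puts bind_with_timeout(0.1)
```

With no timeout, each worker that calls `hung_ldap_bind` is lost for good; with one, the request fails fast and the worker is returned to the pool.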

Version-Release number of selected component (if applicable):

How reproducible:


Steps to Reproduce:
1. Configure the Satellite server to use LDAP authentication
2. Create a user in Active Directory (LDAP)
3. Log in to the web UI as that user

Actual results:

This website is under heavy load


Version : 4.0.18
Date    : Wed Jan 06 09:06:55 +0000 2016
Instance: 2553
----------- General information -----------
Max pool size : 6
Processes     : 6
Requests in top-level queue : 0

----------- Application groups -----------
  App root: /usr/share/foreman
  Requests in queue: 100
  * PID: 4153    Sessions: 1       Processed: 342     Uptime: 40h 14m 12s
    CPU: 27%     Memory  : 281M    Last used: 40h 11m
  * PID: 12493   Sessions: 1       Processed: 6012    Uptime: 39h 49m 49s
    CPU: 25%     Memory  : 165M    Last used: 36h 23m
  * PID: 12504   Sessions: 1       Processed: 11161   Uptime: 39h 49m 48s
    CPU: 24%     Memory  : 165M    Last used: 35h 6m
  * PID: 12517   Sessions: 1       Processed: 803     Uptime: 39h 49m 47s
    CPU: 27%     Memory  : 162M    Last used: 39h 24m

  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 2698    Sessions: 0       Processed: 43738   Uptime: 40h 15m 59s
    CPU: 3%      Memory  : 148M    Last used: 4s ago
  * PID: 2702    Sessions: 0       Processed: 44641   Uptime: 40h 15m 59s
    CPU: 3%      Memory  : 147M    Last used: 9s ago
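The status output above can be read as follows: all six Foreman workers show `Sessions: 1` (each pinned by a hung authentication request), so further requests accumulate in the application group's queue until it hits the cap of 100, at which point Passenger answers with "This website is under heavy load". A toy model of that arithmetic, using the worker count and queue cap from the status output (the request count of 150 is an arbitrary illustration):

```ruby
# Toy model of Passenger's per-application request queue, using the
# numbers from the status output above: 6 workers, queue cap 100.
WORKERS   = 6
QUEUE_CAP = 100

busy_workers = 0
queue        = []
rejected     = 0

# 150 incoming requests while LDAP binds hang: the first 6 pin a worker
# each and never finish, the next 100 fill the queue, and the rest are
# rejected with "This website is under heavy load".
150.times do
  if busy_workers < WORKERS
    busy_workers += 1        # worker blocks forever on the LDAP bind
  elsif queue.size < QUEUE_CAP
    queue << :waiting_request
  else
    rejected += 1
  end
end

puts "busy workers: #{busy_workers}"
puts "requests in queue: #{queue.size}"
puts "rejected: #{rejected}"
```

Once the workers are pinned, raising the queue cap only delays the error message, which matches the report that the queue maxes out "no matter how big" it is set.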

Expected results:

  Requests in queue: well below 100, no error message.

Logged in successfully

Additional info:
Comment 6 Stefan Nemeth 2016-03-07 02:39:26 EST
CSM Comment: Update for customer? Release date required and focus is specifically on the CONTAINERS. **Please update customer, they are beginning to get very frustrated and considering dropping Satellite for an alternative product from another vendor.**

Hi there, 

Would it please be possible to give the customer an update, as there has been nothing customer-facing for nearly a month now?

From our last call with the customer on 17.2.16, their outstanding question was: can container management updates be made more usable? This question needs answering; the case needs immediate focus and a customer-facing update relating to it, please.

Many thanks, 


Customer Success Manager
Red Hat Customer Experience & Engagement, EMEA 
Red Hat UK Ltd, 200 Fowler Avenue, Farnborough Business Park, Farnborough, Hampshire, GU14 7JP. 
Email: psteele@redhat.com |Mobile: +44 7917186036| Desk: +44 1252 362734
Comment 8 Daniel Lobato Garcia 2016-03-31 02:37:49 EDT
Created redmine issue http://projects.theforeman.org/issues/14412 from this bug
Comment 9 Daniel Lobato Garcia 2016-04-04 02:31:48 EDT
Stefan, I don't see how container management has anything to do with this bug.

I'm closing as a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1277085 - which exhibited similar logs, and is fixed via the latest ldap_fluff. Please reopen if you still see this issue with ldap_fluff 0.4.1

*** This bug has been marked as a duplicate of bug 1277085 ***
