| Summary: | Installation will configure itself to use 1 thin process if only one processor | | |
|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | Justin Sherrill <jsherril> |
| Component: | Packaging | Assignee: | Ohad Levy <ohadlevy> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Katello QA List <katello-qa-list> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.0.0 | CC: | cperry, cwelton, jpazdziora, lzap, mmccune, ohadlevy |
| Target Milestone: | Unspecified | Keywords: | Triaged |
| Target Release: | Unused | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-08-22 18:00:57 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | | | |
| Bug Blocks: | 747354 | | |
Description
Justin Sherrill
2011-10-12 22:16:51 UTC
Associated thread (or of relevance): https://www.redhat.com/archives/katello-devel/2011-October/msg00078.html
Take a peek at:
puppet/modules/katello/templates/etc/httpd/conf.d/katello.conf.erb
<Proxy balancer://thinservers>
<%- (processorcount +1).to_i.times do |i| -%>
<%= "BalancerMember http://127.0.0.1:#{scope.lookupvar('katello::params::thin_start_port').to_i + i}/katello" %>
<%- end -%>
</Proxy>
It attempts to set the number of thin servers based on the number of processors.
An example of a 3-instance thin config:
<Proxy balancer://thinservers>
BalancerMember http://127.0.0.1:5000/katello
BalancerMember http://127.0.0.1:5001/katello
BalancerMember http://127.0.0.1:5002/katello
</Proxy>
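For reference, a minimal stand-alone Ruby sketch of what that loop is intended to produce (the port value and the string-typed fact are assumptions based on the template above, not part of the bug report):
# Reproduce the template loop outside Puppet, with hypothetical values.
thin_start_port = 5000           # stands in for katello::params::thin_start_port
processorcount  = "2"            # Puppet facts arrive in templates as strings
(processorcount.to_i + 1).times do |i|
  puts "BalancerMember http://127.0.0.1:#{thin_start_port + i}/katello"
end
# Prints members on ports 5000, 5001 and 5002 for a 2-CPU machine.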
[root@dhcp77-213 ~]# rpm -q facter
facter-1.6.1-1.el6.noarch
[root@dhcp77-213 ~]# facter puppet processorcount
processorcount => 1
puppet =>
[root@dhcp77-213 ~]# grep server /etc/katello/thin.yml
servers: 1
[root@dhcp77-213 ~]# grep BalancerMember /etc/httpd/conf.d/katello.conf
BalancerMember http://127.0.0.1:5000/katello
[root@dhcp77-213 ~]# grep -ir processorcount /usr/share/katello/install
/usr/share/katello/install/puppet/modules/katello/templates/etc/httpd/conf.d/katello.conf.erb: <%- (processorcount +1).to_i.times do |i| -%>
/usr/share/katello/install/puppet/modules/katello/templates/etc/katello/thin.yml.erb:servers: <%= processorcount +1 %>
[root@dhcp77-213 ~]#
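So the fact itself reads as 1 on this box, yet both rendered files behave as if processorcount + 1 evaluated to 1 rather than 2; the +1 is being lost somewhere between the fact and the templates.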
I suspect that we are either somehow not initializing Puppet facts during the installer... OR we are hitting some variant of this bug:
Processorcount is zero
https://projects.puppetlabs.com/issues/2945
Ohad, are you familiar enough with this to debug it / know what would be causing it? The code is (I assume) correct: it takes processorcount and adds 1, so thin.yml *should* list 2 servers and katello.conf should have two BalancerMember lines.
Cliff,
Background: the desired end result from this bug is that we configure the system to use processorcount + 1 thin instances. So, for the BalancerMember lines:
a 1 CPU system has 2 entries in katello.conf
a 2 CPU system has 3 entries in katello.conf
a 4 CPU system has 5 entries in katello.conf
a 16 CPU system has 17 entries in katello.conf
as noted in the example of comment #3, which shows a 2 CPU system. The same rule applies to the 'servers:' line within the thin.yml config.
Cliff, can you try the following?
PROPOSED <%- (processorcount.to_i +1).times do |i| -%>
CURRENT <%- (processorcount +1).to_i.times do |i| -%>
[root@dhcp77-193 ~]# vi /usr/share/katello/install/puppet/modules/katello/templates/etc/httpd/conf.d/katello.conf.erb
[root@dhcp77-193 ~]#
[root@dhcp77-193 ~]# katello-configure
Starting Katello configuration
The top-level log file is [/var/log/katello/katello-configure-20111026-092651/main.log]
Failed to parse template katello/etc/httpd/conf.d/katello.conf.erb: illegal radix 1 at /usr/share/katello/install/puppet/modules/katello/manifests/config.pp:55 on node dhcp77-193.rhndev.redhat.com
[root@dhcp77-193 ~]#
[root@dhcp77-193 ~]# grep -C 4 processorcount /usr/share/katello/install/puppet/modules/katello/templates/etc/httpd/conf.d/katello.conf.erb
Timeout 5400
ProxyTimeout 5400
<Proxy balancer://thinservers>
<%- (processorcount.to_i +1).times do |i| -%>
<%= "BalancerMember http://127.0.0.1:#{scope.lookupvar('katello::params::thin_start_port').to_i + i}/katello" %>
<%- end -%>
</Proxy>
[root@dhcp77-193 ~]#
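A side note on that failure (my reading of it, not stated explicitly in the thread): because +1 is written with no space between the sign and the digit, Ruby treats it as a unary-plus argument rather than a binary addition, so processorcount.to_i +1 parses as the method call processorcount.to_i(+1), i.e. String#to_i with base 1, and any base below 2 raises the "illegal radix 1" ArgumentError on the Ruby 1.8 of that era (newer Rubies say "invalid radix"). A minimal reproduction:
processorcount = "1"       # facts are strings inside templates
processorcount.to_i + 1    # => 2: unambiguous binary plus
processorcount.to_i(2)     # => 1: to_i takes an optional base argument
processorcount.to_i +1     # parsed as processorcount.to_i(+1) -> ArgumentError
The original expression (processorcount +1).to_i plausibly hits the same ambiguity, with +1 swallowed as an argument to the fact lookup, which would explain why the rendered config ended up with a single BalancerMember.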
What is the best way to print out the processorcount value, to confirm whether or not it is initialized?
Cliff,
you can simply do:
file { "/tmp/fact": content => $processorcount }
FILE puppet/modules/katello/templates/etc/httpd/conf.d/katello.conf.erb :
<Proxy balancer://thinservers>
<%- (processorcount.to_i + 1).to_i.times do |i| -%>
<%= "BalancerMember http://127.0.0.1:#{scope.lookupvar('katello::params::thin_start_port').to_i + i}/katello" %>
<%- end -%>
</Proxy>
FILE puppet/modules/katello/templates/etc/katello/thin.yml.erb :
servers: <%= processorcount.to_i + 1 %>
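Both corrected expressions can be sanity-checked outside the installer; a minimal sketch using plain ERB (the string-valued fact is an assumption):
require 'erb'
processorcount = "1"   # a 1-CPU machine; the fact arrives as a string
puts ERB.new("servers: <%= processorcount.to_i + 1 %>").result(binding)
# => servers: 2
(The extra .to_i in the katello.conf.erb version above is redundant, since processorcount.to_i + 1 is already an Integer, but it is harmless.)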
[root@dhcp77-213 ~]# katello-configure
Starting Katello configuration
The top-level log file is [/var/log/katello/katello-configure-20111026-101832/main.log]
[root@dhcp77-213 ~]# grep BalancerMember /etc/httpd/conf.d/katello.conf
BalancerMember http://127.0.0.1:5000/katello
BalancerMember http://127.0.0.1:5001/katello
[root@dhcp77-213 ~]#
[root@dhcp77-213 ~]# more /etc/katello/thin.yml
---
pid: tmp/pids/thin.pid
address: 0.0.0.0
wait: 30
timeout: 30
port: 5000
log: /var/log/katello/thin-log.log
max_conns: 1024
require: []
environment: production
max_persistent_conns: 512
servers: 2
daemonize: yes
chdir: /usr/share/katello
[root@dhcp77-213 ~]#
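So on this 1-CPU box (processorcount => 1 per the facter output above), both rendered files now show the intended processorcount + 1 = 2 instances.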
This also worked. Ohad is committing a fix for this bug.
<%- (processorcount.to_i + 1).times do |i| -%>
committed at b94a20080
Guys, I already pushed a patch that uses processorcount + 1, but ONLY if there is enough memory.
https://bugzilla.redhat.com/show_bug.cgi?id=749495
Note to QAs: You can discard this one and verify bug #749495.
Marking this bug verified/closed per bug #749495. Also note docs bug #795873, which arose from this.