Bug 816247
| Summary: | RHHAv2 needs to handle custom names from luci better | | |
|---|---|---|---|
| Product: | Red Hat Enterprise MRG | Reporter: | Robert Rati <rrati> |
| Component: | condor | Assignee: | Robert Rati <rrati> |
| Status: | CLOSED ERRATA | QA Contact: | Tomas Rusnak <trusnak> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | Development | CC: | matt, mkudlej, trusnak, tstclair |
| Target Milestone: | 2.2 | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | done | | |
| Fixed In Version: | condor-7.6.5-0.15 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-09-19 18:26:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 751870 | | |
Description (Robert Rati, 2012-04-25 15:45:03 UTC)
The cluster-sync-to-store command would take a name provided in cluster.conf and append auto-generated name information for the JS/QS. As a result, the QS/JS would not start, because the generated config was wrong and no SPOOL was defined. Now the tools only auto-generate a name if the JS/QS is added on the command line; the cluster-sync-to-store command uses whatever names are in the cluster.conf file.

Tracking upstream on: V7_6-branch

Custom names added by luci. Wallaby cluster-sync-to-store created the following configuration:
```
# condor_configure_pool -n `hostname` -l -v | grep -i spool
JOB_SERVER.ha_jobserver2.SPOOL = $(SCHEDD.ha_schedd2.SPOOL)
SCHEDD.ha_schedd3.HISTORY = $(SCHEDD.ha_schedd3.SPOOL)/history
SCHEDD.ha_schedd1.HISTORY = $(SCHEDD.ha_schedd1.SPOOL)/history
JOB_SERVER.ha_jobserver3.HISTORY = $(JOB_SERVER.ha_jobserver3.SPOOL)/history
SCHEDD.ha_schedd2.SPOOL = /mnt/ha2
QUERY_SERVER.ha_query3.SPOOL = $(SCHEDD.ha_schedd3.SPOOL)
QUERY_SERVER.ha_query2.SPOOL = $(SCHEDD.ha_schedd2.SPOOL)
JOB_SERVER.ha_jobserver1.HISTORY = $(JOB_SERVER.ha_jobserver1.SPOOL)/history
QUERY_SERVER.ha_query1.SPOOL = $(SCHEDD.ha_schedd1.SPOOL)
QUERY_SERVER.ha_query1.HISTORY = $(QUERY_SERVER.ha_query1.SPOOL)/history
SCHEDD.ha_schedd3.SPOOL = /mnt/ha3
QUERY_SERVER.ha_query3.HISTORY = $(QUERY_SERVER.ha_query3.SPOOL)/history
JOB_SERVER.ha_jobserver3.SPOOL = $(SCHEDD.ha_schedd3.SPOOL)
SCHEDD.ha_schedd2.HISTORY = $(SCHEDD.ha_schedd2.SPOOL)/history
QUERY_SERVER.ha_query2.HISTORY = $(QUERY_SERVER.ha_query2.SPOOL)/history
JOB_SERVER.ha_jobserver1.SPOOL = $(SCHEDD.ha_schedd1.SPOOL)
JOB_SERVER.ha_jobserver2.HISTORY = $(JOB_SERVER.ha_jobserver2.SPOOL)/history
SCHEDD.ha_schedd1.SPOOL = /mnt/ha1
```
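For contrast, a purely illustrative sketch of the pre-fix failure mode described above, where the tool appended its own auto-generated name information on top of the custom name taken from cluster.conf (the exact mangled key shown here is an assumption for illustration, not taken from the report):

```
# HYPOTHETICAL pre-fix output: an extra auto-generated name component is
# appended to the custom name from cluster.conf, so the daemon started
# with -local-name ha_schedd1 looks up SCHEDD.ha_schedd1.SPOOL and finds
# no SPOOL defined, and fails to start.
SCHEDD.ha_schedd1.schedd.SPOOL = /mnt/ha1
```

With the fix, names from cluster.conf are written through unchanged, as in the output above.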
The custom names from cluster.conf remain untouched in the configuration, and the JS/QS were started correctly:
```
# ps ax | grep condor
1726 ?     Ssl 0:20 condor_master -pidfile /var/run/condor/condor_master.pid
1746 ?     Ssl 0:15 condor_collector -f
1749 ?     Ssl 0:07 condor_startd -f
1750 ?     Ssl 0:14 condor_negotiator -f
1751 ?     Ssl 0:59 /usr/bin/python /usr/sbin/condor_configd
6184 ?     S<l 0:00 condor_schedd -pidfile /var/run/condor/condor_schedd-ha_schedd1.pid -local-name ha_schedd1
6196 ?     S<  0:00 condor_procd -A /var/run/condor/procd_pipe.ha_schedd1.SCHEDD -R 10000000 -S 60 -C 64
6226 ?     S<l 0:00 condor_job_server -pidfile /var/run/condor/condor_job_server-ha_jobserver1.pid -local-name ha_jobserver1
6282 ?     S<  0:00 aviary_query_server -pidfile /var/run/condor/aviary_query_server-ha_query1.pid -local-name ha_query1
6302 pts/0 S+  0:00 grep condor
```
>>> VERIFIED