Bug 1302721
| Summary: | "ceph-deploy calamari connect" is not installing diamond package also failing to start salt-minion service | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vikhyat Umrao <vumrao> |
| Component: | Ceph-Installer | Assignee: | Andrew Schoen <aschoen> |
| Status: | CLOSED ERRATA | QA Contact: | Tejas <tchandra> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 1.3.1 | CC: | adeza, aschoen, bhubbard, ceph-eng-bugs, flucifre, gmeno, hnallurv, icolle, kdreyer, nthomas, sankarshan, shmohan, tchandra, tserlin, vikumar |
| Target Milestone: | rc | | |
| Target Release: | 1.3.3 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL: ceph-deploy-1.5.36-1.el7cp Ubuntu: ceph-deploy_1.5.36-2redhat1 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-09-29 12:56:14 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1348597 | | |
Description
Vikhyat Umrao
2016-01-28 13:01:28 UTC
Tried this on RHEL 7.2 with 0.94.5-15.el7cp.x86_64. The diamond packages were installed properly and the salt-minion services were running, but the diamond process failed to start, so Calamari was not generating the graphs. After following the workaround from https://bugzilla.redhat.com/show_bug.cgi?id=1310829 the diamond process started and everything worked fine. Hence moving this bug back to assigned.

My concern is that https://github.com/ceph/calamari/blob/master/salt/srv/salt/diamond.sls should be taking care of this when ceph-deploy calamari connect runs, so the upstream patch to ceph-deploy is evidence that we need to investigate why that salt state in calamari isn't running successfully. Andrew will be reproducing; if we already have a reproducer, please share the details.

I've opened a PR upstream to address this: https://github.com/ceph/calamari/pull/488

I think you mean accept the salt-minion keys AFTER running ceph-deploy calamari connect. That is the whole point of this command: to install salt-minion and configure it to know where Calamari is running. AFTER running it, go to the Calamari web UI and accept the new nodes in the Manage tab.

(In reply to Gregory Meno from comment #18)
> I think you mean accept the salt-minion keys AFTER running ceph-deploy calamari connect.

Yes, sorry. You'll want to run ceph-deploy calamari connect and then accept the new nodes in the web UI. The salt provided by Calamari will then install diamond and start it. Perhaps this ticket could be resolved by updating the docs with what to expect from the ceph-deploy calamari connect command and detailing what needs to happen after that command? If these docs already exist, I've not been able to find them.
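For reference, a minimal sketch of the workflow Gregory and Andrew describe, run from the master node (the node names here are placeholders, not the hosts from this report):

```
# Install and configure salt-minion on each cluster node so it points
# at the Calamari master.
ceph-deploy calamari connect node1 node2 node3

# Accept the new minions in the Calamari web UI (Manage tab), then
# confirm on the master that the keys really were accepted.
salt-key -L
```

Once the keys are accepted, the salt provided by Calamari should install diamond, place its configuration, and start the service.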
Hi Andrew, Greg,

I reran calamari connect with the steps Andrew mentioned:
1. yum remove diamond and salt on all the nodes.
2. Ran calamari connect on the master.
3. Accepted the keys in the web UI.
The status at that point: diamond had not started on any node, although salt-minion was running.
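The removal and reconnect steps above correspond roughly to these commands (a sketch only; node names are placeholders):

```
# 1. On every node, remove the packages left over from the earlier run.
yum remove -y diamond salt

# 2. From the master, reconnect the nodes to Calamari.
ceph-deploy calamari connect node1 node2 node3

# 3. Accept the new minion keys in the Calamari web UI (Manage tab).
```

On magna105 the diamond unit then failed at start-up; the journal showed: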
```
Subject: Unit diamond.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit diamond.service has begun starting up.
Sep 14 04:47:10 magna105 diamond[6680]: Starting diamond: ERROR: Config file: /etc/diamond/diamond.conf does not exist.
Sep 14 04:47:10 magna105 diamond[6680]: Usage: diamond [options]
Sep 14 04:47:10 magna105 diamond[6680]: Options:
Sep 14 04:47:10 magna105 diamond[6680]: -h, --help show this help message and exit
Sep 14 04:47:10 magna105 diamond[6680]: -c CONFIGFILE, --configfile=CONFIGFILE
Sep 14 04:47:10 magna105 diamond[6680]: config file
Sep 14 04:47:10 magna105 diamond[6680]: -f, --foreground run in foreground
Sep 14 04:47:10 magna105 diamond[6680]: -l, --log-stdout log to stdout
Sep 14 04:47:10 magna105 diamond[6680]: -p PIDFILE, --pidfile=PIDFILE
Sep 14 04:47:10 magna105 diamond[6680]: pid file
Sep 14 04:47:10 magna105 diamond[6680]: -r COLLECTOR, --run=COLLECTOR
Sep 14 04:47:10 magna105 diamond[6680]: run a given collector once and exit
Sep 14 04:47:10 magna105 diamond[6680]: -v, --version display the version and exit
Sep 14 04:47:10 magna105 diamond[6680]: --skip-pidfile Skip creating PID file
Sep 14 04:47:10 magna105 diamond[6680]: -u USER, --user=USER Change to specified unprivilegd user
Sep 14 04:47:10 magna105 diamond[6680]: -g GROUP, --group=GROUP
Sep 14 04:47:10 magna105 diamond[6680]: Change to specified unprivilegd group
Sep 14 04:47:10 magna105 diamond[6680]: --skip-change-user Skip changing to an unprivilegd user
Sep 14 04:47:10 magna105 diamond[6680]: --skip-fork Skip forking (damonizing) process
Sep 14 04:47:10 magna105 diamond[6680]: [17B blob data]
Sep 14 04:47:10 magna105 systemd[1]: PID file /var/run/diamond.pid not readable (yet?) after start.
Sep 14 04:47:10 magna105 systemd[1]: Failed to start LSB: System statistics collector for Graphite.
```
4. Ran `salt '*' state.highstate` on the master node.

Diamond now started.
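A sketch of that step and a quick check of its effect (same placeholder assumptions as above; paths as in the log):

```
# Apply the full salt highstate to every connected minion.
salt '*' state.highstate

# On a cluster node, check whether the diamond config now exists
# and whether the service came up.
ls -l /etc/diamond/diamond.conf
systemctl status diamond
```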
Does the "/etc/diamond/diamond.conf" file get created when the `salt '*' state.highstate` command is run?
If so, we need to document this sequence of steps.
Thanks,
Tejas
Tejas,

Yes, it is the salt provided by calamari-server that ensures diamond is installed, places its diamond.conf file, and starts the diamond service. You should not need to run that manually, though; once the salt-minions are installed and the keys are accepted, they should take care of it all.

I logged onto magna105 and noticed a few things I have questions about:

1. On magna105 I noticed that calamari-server was not installed. Did you install calamari-server on all nodes? calamari-server needs to be installed on all nodes, not just the master.
2. Did you verify with `salt-key -L` on the master node that the keys were actually accepted after doing so through the web UI?
3. Which node is your master node? What other nodes were you using in this test?
4. When removing diamond before this test, did you make sure that `/var/lock/subsys/diamond` was removed from all nodes as well?
5. What was the exact ceph-deploy calamari connect command that you ran?

Another thing to mention is that once the salt minions are connected to the master, it will take a minute or so for the minions to respond and get everything installed.

Tejas, I've figured out that the nodes you used for this test were:

magna104.ceph.redhat.com
magna105.ceph.redhat.com
magna107.ceph.redhat.com
magna108.ceph.redhat.com

I'm going to take these nodes today and try to recreate.

Hi Andrew,

I followed a similar set of steps to what you mentioned. However, in addition I also did these (sketched at the end of this report):

1. yum remove salt on the master. This removes calamari-client, calamari-server, salt, salt-master and salt-minion.
2. rm -rf /etc/salt, so that all the minion files and the keys are deleted.

I then did the same set of steps, and it worked great. I will be moving this bug to Doc.

Thanks,
Tejas

On further consideration, no doc changes are needed; the existing workflow from 1.3.2 should work. Moving the bug to Verified state.

Thanks,
Tejas

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-1972.html
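For completeness, a sketch of the extra cleanup Tejas describes above, plus checks for Andrew's questions 2 and 4 (run on the nodes indicated in the comments):

```
# On the master: remove salt, which also pulls out calamari-client,
# calamari-server, salt-master and salt-minion, then delete the minion
# files and previously accepted keys.
yum remove -y salt
rm -rf /etc/salt

# On every node: remove the stale LSB lock file as well (question 4).
rm -f /var/lock/subsys/diamond

# After reconnecting and accepting keys, confirm acceptance on the
# master (question 2).
salt-key -L
```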