Description of problem: Calamari's client packages are not installed on the base image for Ceph Monitors and OSD Hosts. This is required for Calamari to function properly. Expected results: Calamari's client packages installed on the base image for Ceph Monitors and OSD Hosts.
For us, Ceph monitors are deployed on the controller nodes. When making the Calamari connection to the OSDs and monitors, which user should run the connection process? Is it heat_admin?
Can you list exactly what packages we need to include? Is it just calamari-clients? Is that shipped anywhere other than the calamari repo in CDN?
In order to get Calamari to communicate with the Controller nodes (MONs) and Storage nodes (OSDs), I had to manually create a tar archive containing the following RPMs, copy the archive to the nodes, and install them manually:

diamond-3.4.67-4.el7cp.noarch.rpm
openpgm-5.2.122-3.el7cp.x86_64.rpm
python-babel-0.9.6-8.el7.noarch.rpm
python-crypto-2.6.1-1.el7cp.x86_64.rpm
python-jinja2-2.7.2-2.el7cp.noarch.rpm
python-markupsafe-0.11-10.el7.x86_64.rpm
python-msgpack-0.4.5-1.el7cp.x86_64.rpm
python-zmq-14.3.1-1.el7cp.x86_64.rpm
salt-2014.1.5-3.el7cp.noarch.rpm
salt-minion-2014.1.5-3.el7cp.noarch.rpm
sshpass-1.05-5.el7cp.x86_64.rpm
zeromq3-3.2.5-1.el7cp.x86_64.rpm

This list includes the packages and their dependencies. Here are the commands I used for installation:

rpm -ivh diamond-3.4.67-4.el7cp.noarch.rpm
rpm -ivh salt-minion-2014.1.5-3.el7cp.noarch.rpm \
    salt-2014.1.5-3.el7cp.noarch.rpm \
    python-zmq-14.3.1-1.el7cp.x86_64.rpm \
    sshpass-1.05-5.el7cp.x86_64.rpm \
    zeromq3-3.2.5-1.el7cp.x86_64.rpm \
    openpgm-5.2.122-3.el7cp.x86_64.rpm
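The manual procedure above (archive the RPMs, copy to each node, install) can be sketched as a small script. This is only an illustration: the node names, the heat-admin user, and the dry-run convention are assumptions, and it defaults to printing the commands (RUN=echo) rather than executing them; clear RUN to run for real.

```shell
#!/bin/sh
# Dry-run sketch of distributing the Calamari client RPMs to the
# Controller (MON) and Storage (OSD) nodes. RUN=echo prints each
# command instead of executing it; set RUN= to execute.
set -e
RUN=echo

# Hypothetical node names; substitute your deployment's hosts.
NODES="controller-0 ceph-storage-0"
ARCHIVE=calamari-client-rpms.tar

# Packages and dependencies, as listed in the comment above.
RPMS="diamond-3.4.67-4.el7cp.noarch.rpm
openpgm-5.2.122-3.el7cp.x86_64.rpm
python-babel-0.9.6-8.el7.noarch.rpm
python-crypto-2.6.1-1.el7cp.x86_64.rpm
python-jinja2-2.7.2-2.el7cp.noarch.rpm
python-markupsafe-0.11-10.el7.x86_64.rpm
python-msgpack-0.4.5-1.el7cp.x86_64.rpm
python-zmq-14.3.1-1.el7cp.x86_64.rpm
salt-2014.1.5-3.el7cp.noarch.rpm
salt-minion-2014.1.5-3.el7cp.noarch.rpm
sshpass-1.05-5.el7cp.x86_64.rpm
zeromq3-3.2.5-1.el7cp.x86_64.rpm"

# Bundle everything into a single archive.
$RUN tar -cf "$ARCHIVE" $RPMS

for node in $NODES; do
    # Copy the archive to each node, then install all RPMs in one
    # transaction so rpm can resolve the inter-package dependencies.
    $RUN scp "$ARCHIVE" "heat-admin@$node:/tmp/"
    $RUN ssh "heat-admin@$node" "cd /tmp && tar -xf $ARCHIVE && sudo rpm -ivh *.rpm"
done
```

Installing all RPMs in one rpm transaction avoids having to order the individual rpm -ivh invocations by dependency, which is why the original comment grouped salt-minion with its zeromq/openpgm dependencies.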
See this WIP commit upstream; it explains what's required to integrate Calamari with TripleO: https://review.openstack.org/#/c/242267/2/environments/README-calamari.md
Based on a discussion today, this request is meant to reduce the need to add the packages using virt-customize. virt-customize is an option as long as there is an easy path to package updates from CDN, which can be achieved with the proper entitlements and repos enabled on the overcloud nodes. The only action left here is to figure out what those entitlements and repos are.
The repo rhel-7-server-rhceph-1.3-mon-rpms provides the salt-minion and diamond packages and their dependencies. Below are the steps I use to customize the image with those packages. This assumes the requirements for using virt-customize have already been satisfied by installing libguestfs-tools.

virt-customize -a overcloud-full.qcow2 --run-command 'subscription-manager register --username=<account username> --password=<password>'
virt-customize -a overcloud-full.qcow2 --run-command 'subscription-manager attach --pool=8a85f9824bfa390f014c4ce815663779'
virt-customize -a overcloud-full.qcow2 --run-command 'yum-config-manager --enable rhel-7-server-rhceph-1.3-mon-rpms'
virt-customize -a overcloud-full.qcow2 --install diamond
virt-customize -a overcloud-full.qcow2 --install salt-minion --selinux-relabel
virt-customize -a overcloud-full.qcow2 --run-command 'subscription-manager remove --all'
virt-customize -a overcloud-full.qcow2 --run-command 'subscription-manager unregister'
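As a variation on the steps above, virt-customize accepts multiple --run-command and --install options in a single invocation, so the image only has to be opened (and SELinux-relabeled) once. A sketch, dry-run by default (RUN=echo) since it needs a real image and subscription; the pool ID is the one from the comment above, and the username/password placeholders must be filled in:

```shell
#!/bin/sh
# Single-invocation variant of the virt-customize steps above.
# RUN=echo prints the command; set RUN= to actually modify the image.
RUN=echo
IMAGE=overcloud-full.qcow2
POOL=8a85f9824bfa390f014c4ce815663779

$RUN virt-customize -a "$IMAGE" \
  --run-command 'subscription-manager register --username=<account username> --password=<password>' \
  --run-command "subscription-manager attach --pool=$POOL" \
  --run-command 'yum-config-manager --enable rhel-7-server-rhceph-1.3-mon-rpms' \
  --install diamond,salt-minion \
  --run-command 'subscription-manager remove --all' \
  --run-command 'subscription-manager unregister' \
  --selinux-relabel
```

virt-customize applies the operations in order, so the repo is enabled before the installs and the subscription is removed afterwards, leaving no entitlement baked into the image.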
*** Bug 1299093 has been marked as a duplicate of this bug. ***
Does this mean you now have an overcloud image that contains the diamond and salt-minion and required packages? If so, will this be part of Beta9?
Radek, removing the doc text for now; we're not fixing this in OSP 8. John, no, these packages will not be in the image in OSP 8. To include them, you'll need to use virt-customize as in comment 8.
ACK, thanks Mike.
This bug did not make the OSP 8.0 release. It is being deferred to OSP 10.
Radek, looks like the doc text still got into the release notes for the RC. Can we get it removed?
Oops. Looks like the text was pulled into the doc on March 23 and survived the requires_doc_text-. I'll remove it in git. I can have the RN doc re-uploaded to the Portal. Is there any other place where an update is needed? Thanks!
Did BZ#1309455 make it into the release, or should it be pulled from the docs as well?
Bug 1309455 is in the RN, too. (For those who aren't sure where to look: https://access.redhat.com/documentation/en/red-hat-openstack-platform/version-8/release-notes/ )
The automatic install of USM will include the Calamari component (API), which will be manually deployed in OSP-10 and will have to be baked into the Ceph images in OSP-11. Retargeting to 11.
let me phrase this more clearly: in OSP-11, we will need the Calamari API to be part of the base Ceph image. USM itself will run in its own VM.
This BZ is targeted for Liberty, so I expect we need to clone this bug for OSPs up to OSP 11. Does the overcloud image contain the client libraries for Storage Console? If not, we need another BZ for it. Depending on whether Ceph 2.0 is supported with OSP 9, OSP 10, and OSP 11, we need these libraries for each of them.
The Version field says 8.0 -- that's the version it's reported against. The Target Release is where we're currently targeting and that says Ocata/11. This bug is tracking including the client libraries which is being targeted for 11. OSP 9 natively supports Ceph 1.3, so we won't be backporting it to there. OSP 10 is tricky. This is an RFE, and we're past feature freeze. For 10, it likely makes more sense to add these packages using virt-customize.
I believe the expectation for OSP-10 is a manual install procedure for USM. That procedure will include steps to install the Calamari API, also manually. Bug is targeted at 11 to track baking the Calamari API bits into the image at that time.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days