Bug 1127701
Summary: | cinder backup with ceph backend lacks the input of the ceph user's keyring file | |
---|---|---|---|
Product: | Red Hat OpenStack | Reporter: | Yogev Rabl <yrabl> |
Component: | openstack-foreman-installer | Assignee: | Crag Wolfe <cwolfe> |
Status: | CLOSED EOL | QA Contact: | Shai Revivo <srevivo> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | 5.0 (RHEL 7) | CC: | cwolfe, mburns, morazi, nlevine, rhos-maint, srevivo, tshefi |
Target Milestone: | --- | Keywords: | Reopened, ZStream |
Target Release: | Installer | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2016-09-29 13:36:26 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Yogev Rabl
2014-08-07 11:26:25 UTC
Right now, staypuft/astapor does not configure *anything* under /etc/ceph (such as keyrings). There are RFEs filed for puppet management of that directory and its contents. Additionally, staypuft/astapor does not configure the cinder-backup service. Closing based on the above; ceph configuration is external to staypuft at the current time.

I've been asked to check staypuft's Cinder and Glance with a Ceph backend. The Ceph option was added in the current release, which is great, but Ceph parameters are still missing. Hence I can't check staypuft's configuration of Ceph for Cinder/Glance; adding blocker.

Version:
rhel-osp-installer-0.1.9-1.el6ost.noarch
foreman-installer-1.5.0-0.6.RC2.el6ost.noarch
openstack-foreman-installer-2.0.18-1.el6ost.noarch

All ceph parameters for Cinder and Glance are defaulted and available in the advanced configuration. There are no required parameters to be exposed in the standard wizard. As stated in comment 1, cinder-backup is not configured/supported at this time.

Can we get doc_text for the present use of ceph-deploy to populate config values?

I'm not sure how to do this; passing on the needinfo to Neil. When I added a compute node (which is not a ceph mon node), I used "ceph-deploy install" to install the needed packages (we won't need to do this anymore once the astapor puppet manifests install the packages; the PR for that is already merged). Then I copied/rsync'ed the entirety of the /etc/ceph/ dir from one of the existing ceph cluster nodes. I couldn't see from the ceph-deploy documentation how to copy over the needed config files so as to be sure that /etc/ceph/ceph.conf, ceph.client.images.keyring, and ceph.client.volumes.keyring were created on the compute node.

Mike, I disagree: the basic Ceph configurations for Glance and Cinder require these parameters. Administrators need to know what parameters they must provide in order to work with Ceph, without additional post-installation configuration.
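The manual workaround described above (installing the Ceph client packages with ceph-deploy, then copying /etc/ceph/ wholesale from an existing cluster member) can be sketched roughly as follows. This is a sketch of the by-hand procedure, not anything astapor does; the hostnames are placeholders, ceph-deploy must run from an admin node with SSH access to the targets, and the commands obviously require a live cluster:

```shell
# Run from the ceph admin node. "newcompute" and "mon1" are
# placeholder hostnames, not names taken from this BZ.

# Install the ceph client packages on the new compute node.
ceph-deploy install newcompute

# ceph-deploy documents pushing ceph.conf but not arbitrary client
# keyrings, so copy the whole /etc/ceph/ directory from an existing
# cluster member; this brings over ceph.conf plus the client keyrings
# (ceph.client.images.keyring, ceph.client.volumes.keyring).
# rsync cannot go remote-to-remote in one hop, hence the staging dir.
rsync -av mon1:/etc/ceph/ /tmp/etc-ceph/
rsync -av /tmp/etc-ceph/ newcompute:/etc/ceph/

# Keyrings contain secrets; restrict them to root.
ssh newcompute 'chmod 600 /etc/ceph/*.keyring'
```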
(In reply to Yogev Rabl from comment #8)
> Mike, I disagree, the Ceph basic configurations for Glance and Cinder
> require these parameters. Administrators would need to know what they need
> to provide them in order to work with Ceph, without additional post
> installation configurations.

The Ceph configuration supported in this release is very limited. All other configuration needed beyond what is available in the advanced configuration must be done separately from the RHEL-OSP Installer.

Just saw that this is still open. As Puppet is creating the client keyrings and ceph.conf, the assumption was that Puppet would be responsible for copying these to the compute nodes as well as the controller. Can you confirm this is what has been implemented for A2? N

Crag, can you confirm this is all set in A2?

Confirmed that puppet creates the client images and volumes keyring files on the controller and compute nodes. Note this has nothing to do with the subject of this BZ, cinder-backup, which we are not configuring.

Closing the list of bugs for the RHEL OSP Installer since its support cycle has already ended [0]. If some bug was closed by mistake, feel free to re-open. For new deployments, please use RHOSP director (starting with version 7).

-- Jaromir Coufal
-- Sr. Product Manager
-- Red Hat OpenStack Platform

[0] https://access.redhat.com/support/policy/updates/openstack/platform
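For context on the BZ's actual subject: when cinder-backup is configured by hand against a Ceph backend, the keyring the installer never asked for is what lets the configured Ceph user authenticate. A minimal cinder.conf sketch follows; the backup_ceph_* option names are standard Cinder backup options, while the specific user, pool, and driver module path are illustrative assumptions (verify against the shipped cinder release), not values any installer produced:

```ini
[DEFAULT]
# Ceph backup driver (Icehouse-era module path; an assumption,
# check the exact path for your cinder version).
backup_driver = cinder.backup.drivers.ceph

# Cluster config and the Ceph user whose keyring must exist under
# /etc/ceph/ (e.g. ceph.client.cinder-backup.keyring) -- the file
# this BZ says the installer provides no input for.
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_pool = backups
```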