| Summary: | rhel-osp-director: Unable to create objects on external ceph | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Alexander Chuzhoy <sasha> |
| Component: | rhosp-director | Assignee: | John Fulton <johfulto> |
| Status: | CLOSED NOTABUG | QA Contact: | Yogev Rabl <yrabl> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 10.0 (Newton) | CC: | dbecker, gfidente, johfulto, jschluet, jslagle, kiran, mburns, mcornea, morazi, rhel-osp-director-maint, sasha |
| Target Milestone: | rc | | |
| Target Release: | 10.0 (Newton) | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-11-11 19:04:13 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description Alexander Chuzhoy 2016-11-09 22:07:04 UTC

giulio, could you take a look at this one? I could not reproduce it in my environment, but given that this is a very important feature, it looks like we need someone else to test this too.

Alex, would you please update this bug with the following info?

1. Do you happen to have the environment still available for some more troubleshooting?

2. Did you create the following Ceph pools on the Ceph cluster before the deployment?

NovaRbdPoolName: vms
CinderRbdPoolName: volumes
GlanceRbdPoolName: images
GnocchiRbdPoolName: metrics

3. I assume the following correspond to the actual FSID, IPs, and a real key that was used to create the cluster.

CephClusterFSID: '<fsid>'
CephClientKey: 'key'
CephExternalMonHost: '<IPs>'

4. What version of Ceph is the external Ceph cluster running?

I am going to try to reproduce this as a next step with my env in the meantime.

Thanks,
John

John,
1) The environment isn't available now. Will try to create one for you.
2) Yes, the volumes exist on the external Ceph. The same Ceph setup was used for previous tests (OSP 8, OSP 9).
3) Yes, the keys correspond. Double checked.
4) ceph-common-0.94.5-0.el7.x86_64
ceph-0.94.5-0.el7.x86_64
python-cephfs-0.94.5-0.el7.x86_64
ceph-deploy-1.5.28-0.noarch
ceph-radosgw-0.94.5-0.el7.x86_64

(In reply to Alexander Chuzhoy from comment #6)
> John,
> 1) The environment isn't available now. Will try to create one for you.

Thanks. When you recreate, please use the following:

parameter_defaults:
  ExtraConfig:
    ceph::conf::args:
      client/rbd_default_features:
        value: "1"

As per your answer to #4, you're using a Ceph 1.3 server, which requires the above flag for backwards compatibility. Such backwards compatibility was not necessary when using the OSP 9 image, as it shipped a Ceph 1.3 client. The root cause here may just be that OSP 10 ships a Ceph 2 client, so you need to enable the flag so the Ceph 2 client can talk to the Ceph 1.3 server.

As verified by the reporter, using the following Heat environment during deployment resolved the issue.
parameter_defaults:
  ExtraConfig:
    ceph::conf::args:
      client/rbd_default_features:
        value: "1"
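
For reference, a minimal sketch of how the pieces discussed in this bug might be combined into a single custom Heat environment file. This is an illustration only: the FSID, key, and monitor IPs are placeholders, and the pool names simply repeat the defaults mentioned in the comments above.

parameter_defaults:
  # Identity of the pre-existing external Ceph cluster (placeholder values)
  CephClusterFSID: '<fsid>'
  CephClientKey: '<client key>'
  CephExternalMonHost: '<comma-separated monitor IPs>'
  # Pools that must already exist on the external cluster
  NovaRbdPoolName: vms
  CinderRbdPoolName: volumes
  GlanceRbdPoolName: images
  GnocchiRbdPoolName: metrics
  # Limit newly created RBD images to the layering feature (value "1") so
  # images created by the Ceph 2 (Jewel) client stay usable by the
  # Ceph 1.3 (Hammer) cluster
  ExtraConfig:
    ceph::conf::args:
      client/rbd_default_features:
        value: "1"

Such a file would be passed to openstack overcloud deploy with an additional -e argument, alongside the external-Ceph environment shipped with the templates (environments/puppet-ceph-external.yaml in this release), and puppet-ceph should render the ceph::conf::args entry as rbd_default_features = 1 under the [client] section of /etc/ceph/ceph.conf on the overcloud nodes.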
Thus, this is not really a bug. It could be considered a documentation issue; however, that documentation issue is already triaged as per:
https://bugzilla.redhat.com/show_bug.cgi?id=1385034#c9
Thus, I'm closing this BZ.
*** Bug 1395324 has been marked as a duplicate of this bug. ***