rhel-osp-director: Unable to create objects on external ceph

Environment:
  openstack-tripleo-heat-templates-5.0.0-1.4.el7ost.noarch
  instack-undercloud-5.0.0-4.el7ost.noarch
  openstack-puppet-modules-9.3.0-1.el7ost.noarch
  puppet-ceph-2.2.1-3.el7ost.noarch

Steps to reproduce:
Deployed the overcloud with:

  openstack overcloud deploy --templates --libvirt-type kvm \
    --ntp-server clock.redhat.com \
    --neutron-network-type vxlan --neutron-tunnel-types vxlan \
    --control-scale 3 --control-flavor controller-d75f3dec-c770-5f88-9d4c-3fea1bf9c484 \
    --compute-scale 2 --compute-flavor compute-b634c10a-570f-59ba-bdbf-0c313d745a10 \
    --ceph-storage-scale 0 --ceph-storage-flavor ceph-cf1f074b-dadb-5eb8-9eb0-55828273fab7 \
    -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml \
    -e virt/hostnames.yml \
    -e virt/network/network-environment.yaml

/usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml looks as follows:

  # A Heat environment file which can be used to enable the
  # use of an externally managed Ceph cluster.
  resource_registry:
    OS::TripleO::Services::CephExternal: ../puppet/services/ceph-external.yaml
    OS::TripleO::Services::CephMon: OS::Heat::None
    OS::TripleO::Services::CephClient: OS::Heat::None
    OS::TripleO::Services::CephOSD: OS::Heat::None

  parameter_defaults:
    # NOTE: These example parameters are required when using CephExternal
    CephClusterFSID: '<fsid>'
    CephClientKey: 'key'
    CephExternalMonHost: '<IPs>'

    # the following parameters enable Ceph backends for Cinder, Glance, Gnocchi and Nova
    NovaEnableRbdBackend: true
    CinderEnableRbdBackend: true
    CinderBackupBackend: ceph
    GlanceBackend: rbd
    GnocchiBackend: rbd

    # If the Ceph pools which host VMs, Volumes and Images do not match these
    # names OR the client keyring to use is not named 'openstack', edit the
    # following as needed.
    NovaRbdPoolName: vms
    CinderRbdPoolName: volumes
    GlanceRbdPoolName: images
    GnocchiRbdPoolName: metrics
    CephClientUserName: openstack

    # finally we disable the Cinder LVM backend
    CinderEnableIscsiBackend: false

    # Backward compatibility setting, will be removed in the future
    CephAdminKey: ''

The deployment completed successfully, yet I'm not able to create anything on the storage.

cinder create 1 results in:

  +--------------------------------------+--------+------+------+-------------+----------+-------------+
  |                  ID                  | Status | Name | Size | Volume Type | Bootable | Attached to |
  +--------------------------------------+--------+------+------+-------------+----------+-------------+
  | f23f739b-d40d-4aa4-8b71-64c6f2507d6e | error  |  -   |  1   |      -      |  false   |             |
  +--------------------------------------+--------+------+------+-------------+----------+-------------+

  [root@controller-0 ~]# ceph status
  2016-11-09 22:05:59.769688 7f0a70f67700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
  2016-11-09 22:05:59.769697 7f0a70f67700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
  2016-11-09 22:05:59.769698 7f0a70f67700  0 librados: client.admin initialization error (2) No such file or directory
  Error connecting to cluster: ObjectNotFound
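(Side note: the `ceph status` failure above is expected when CephAdminKey is set to '' — no admin keyring is deployed to the overcloud nodes. A way to check connectivity from a controller is to authenticate as the 'openstack' client instead; this is only a sketch, and the keyring path assumes the default CephClientUserName of 'openstack' and is not runnable without a live cluster:)

```shell
# The overcloud nodes only get the 'openstack' client keyring, not admin's,
# so tell the CLI which identity and keyring to use:
ceph -s --id openstack --keyring /etc/ceph/ceph.client.openstack.keyring

# The same identity should also be able to list the cinder pool:
rbd ls volumes --id openstack --keyring /etc/ceph/ceph.client.openstack.keyring
```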
Giulio, could you take a look at this one?
I could not reproduce it on my environment, but given this is a very important feature, it looks like we need someone else to test it too.
Alex, would you please update this bug with the following info?

1. Do you happen to have the environment still available for some more troubleshooting?

2. Did you create the following Ceph pools on the ceph cluster before the deployment?
     NovaRbdPoolName: vms
     CinderRbdPoolName: volumes
     GlanceRbdPoolName: images
     GnocchiRbdPoolName: metrics

3. I assume the following correspond to the actual FSID, IPs and a real key that was used to create the cluster:
     CephClusterFSID: '<fsid>'
     CephClientKey: 'key'
     CephExternalMonHost: '<IPs>'

4. What version of Ceph is the external ceph cluster running?

I am going to try to reproduce this as a next step with my env in the meantime.

Thanks,
John
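(For reference, on a cluster where the pools from question 2 are missing, they could be pre-created along the lines of the following sketch; the PG count of 64 is only a placeholder and should be sized for the actual cluster:)

```shell
# Create the four pools the overcloud expects (PG count is illustrative):
for pool in vms volumes images metrics; do
    ceph osd pool create "$pool" 64
done
```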
John,
1) The environment isn't available now. Will try to create one for you.
2) Yes, the pools exist on the external ceph. The same ceph setup was used for previous tests (OSP8, OSP9).
3) Yes, the keys correspond. Double checked.
4) ceph-common-0.94.5-0.el7.x86_64
   ceph-0.94.5-0.el7.x86_64
   python-cephfs-0.94.5-0.el7.x86_64
   ceph-deploy-1.5.28-0.noarch
   ceph-radosgw-0.94.5-0.el7.x86_64
(In reply to Alexander Chuzhoy from comment #6)
> John,
> 1) the environment isn't available now. Will try to create one for you.

Thanks. When you recreate, please use the following:

  parameter_defaults:
    ExtraConfig:
      ceph::conf::args:
        client/rbd_default_features:
          value: "1"

As per your answer to question 4, you're using a Ceph 1.3 server, which requires the above flag for backwards compatibility. Such backwards compatibility was not necessary when using the OSP9 image, as it shipped a Ceph 1.3 client. The root cause here may just be that OSP10 ships a Ceph 2 client, so you need to enable the flag so the Ceph 2 client can talk to the Ceph 1.3 server.
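(To expand on why "1" is the value: rbd_default_features is a bitmask of RBD image features. The bit values below are Ceph's librbd feature flags; the claim that the newer Jewel-era client defaults to 61 is my reading of that release, so treat it as an assumption. A small sketch decoding the two masks:)

```python
# RBD image-feature bit flags as defined by Ceph's librbd.
RBD_FEATURES = {
    1: "layering",
    2: "striping",
    4: "exclusive-lock",
    8: "object-map",
    16: "fast-diff",
    32: "deep-flatten",
    64: "journaling",
}

def decode_features(mask):
    """Return the feature names enabled in an rbd_default_features bitmask."""
    return [name for bit, name in sorted(RBD_FEATURES.items()) if mask & bit]

# A Jewel-era (Ceph 2) client creates images with 61 by default:
print(decode_features(61))
# -> ['layering', 'exclusive-lock', 'object-map', 'fast-diff', 'deep-flatten']

# rbd_default_features = "1" keeps only layering, which an older
# Hammer-era (Ceph 1.3) server understands:
print(decode_features(1))
# -> ['layering']
```

An older server refuses to open images carrying feature bits it does not know, which is why new-client/old-server combinations need the mask forced down to layering only.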
As verified by the reporter, using the following Heat environment during deployment resolved the issue:

  parameter_defaults:
    ExtraConfig:
      ceph::conf::args:
        client/rbd_default_features:
          value: "1"

Thus, this is not really a bug. It could be considered a documentation issue; however, that documentation issue is already triaged as per:
https://bugzilla.redhat.com/show_bug.cgi?id=1385034#c9

Thus, I'm closing this BZ.
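(For reference, the ExtraConfig above ends up as a plain ceph.conf entry on the overcloud nodes. This is a sketch assuming puppet-ceph's usual section/key rendering of ceph::conf::args:)

```
# /etc/ceph/ceph.conf, as rendered from the
# 'client/rbd_default_features' => { value => "1" } entry
[client]
rbd_default_features = 1
```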
*** Bug 1395324 has been marked as a duplicate of this bug. ***