Bug 1700538 - generated openstack keyring not working on rhcs4/osp15 composes (admin keyring does work)
Summary: generated openstack keyring not working on rhcs4/osp15 composes (admin keyring does work)
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 4.0
Assignee: Guillaume Abrioux
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks: 1594251
 
Reported: 2019-04-16 20:10 UTC by John Fulton
Modified: 2022-02-21 17:55 UTC (History)
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-04-17 16:59:42 UTC
Embargoed:


Attachments
output from 'rbd --debug-ms 10 -c /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring -p vms ls -l' (70.39 KB, text/plain)
2019-04-17 15:31 UTC, John Fulton
no flags Details
output of 'rbd --debug-ms 10 -c /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.admin.keyring' (63.23 KB, text/plain)
2019-04-17 15:46 UTC, John Fulton
no flags Details
ceph-ansible deployment log with -vvv (3.29 MB, application/gzip)
2019-04-17 16:01 UTC, John Fulton
no flags Details

Description John Fulton 2019-04-16 20:10:03 UTC
While testing the RHCS4 composes [1] against the OSP15 composes on RHEL8, the Ceph deployment completes, but the OpenStack keyring that is generated by default [2] does not work [3]: a simple command like the following times out with "monclient(hunting): authenticate timed out after 300":

 rbd -c /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring -p vms ls -l

However, if I replace [4] the content of ceph.client.openstack.keyring with the content of ceph.client.admin.keyring on all nodes, then it works!



[1] 
ceph-ansible-4.0.0-0.beta1.95.g6d506dba.el8cp.noarch
docker-registry.engineering.redhat.com/ceph/rhceph-4-rhel8:latest 7678fc9d3643

[2] 
[root@overcloud-controller-0 ceph]# cat ceph.client.openstack.keyring
[client.openstack]
        key = AQDH1LVcAAAAABAAqLWKC0cdZJdLHCphocW9Zg==
        caps mgr = "allow *"
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=vms, profile rbd pool=images, profile rbd pool=metrics"
[root@overcloud-controller-0 ceph]# 

[3] http://paste.openstack.org/show/749262

[4] workaround
# ensure ceph.client.admin.keyring is in /etc/ceph/ on all nodes
cp -a ceph.client.openstack.keyring ceph.client.openstack.keyring.bak
cat ceph.client.admin.keyring > ceph.client.openstack.keyring
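
As a suggested debugging step (not part of the original report), one way to sanity-check a generated keyring like this is to compare the key in the local file against what the monitors have registered for the entity; run this on a node that has the admin keyring:

# compare the key stored in the cluster with the key in the local keyring file
ceph auth get client.openstack
cat /etc/ceph/ceph.client.openstack.keyring
# if the two keys differ, the generated keyring is stale or wrong;
# if they match, the problem is elsewhere (e.g. caps, or the client name used)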

Comment 2 John Fulton 2019-04-17 15:31:01 UTC
Created attachment 1555943 [details]
output from 'rbd --debug-ms 10 -c /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring -p vms ls -l'

Comment 3 John Fulton 2019-04-17 15:46:26 UTC
Created attachment 1555944 [details]
output of 'rbd --debug-ms 10 -c /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.admin.keyring'

It might be useful to compare these two attachments to see why the admin key works but the openstack key does not.
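
One quick way to do that comparison, assuming the two attachments are saved locally as rbd-openstack.log and rbd-admin.log (hypothetical filenames), is to strip the timestamps and thread ids so diff only shows message-level differences:

# normalize both debug logs, then diff them
sed 's/^[0-9-]\+ [0-9:.]\+ [0-9a-f]\+ \+//' rbd-openstack.log > openstack.norm
sed 's/^[0-9-]\+ [0-9:.]\+ [0-9a-f]\+ \+//' rbd-admin.log > admin.norm
diff openstack.norm admin.norm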

Comment 4 John Fulton 2019-04-17 16:01:42 UTC
Created attachment 1555949 [details]
ceph-ansible deployment log with -vvv

Comment 5 Guillaume Abrioux 2019-04-17 16:09:41 UTC
Adding `-n client.openstack` makes that command work, e.g.:

bash-4.4# rbd --debug-ms 1  -c /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.openstack.keyring -n client.openstack -p vms ls 
2019-04-17 16:09:02.574 7f832484d040  1  Processor -- start
2019-04-17 16:09:02.575 7f832484d040  1 --  start start
2019-04-17 16:09:02.575 7f832484d040  1 --2-  >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475a8d0 0x5639f475ad10 unknown :-1 s=NONE pgs=0 cs=0 l=0 rx=0 tx=0).connect
2019-04-17 16:09:02.575 7f832484d040  1 --  --> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] -- mon_getmap magic: 0 v1 -- 0x5639f45c0fc0 con 0x5639f475a8d0
2019-04-17 16:09:02.575 7f8312883700  1 --2-  >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475a8d0 0x5639f475ad10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rx=0 tx=0)._handle_peer_banner_payload supported=0 required=0
2019-04-17 16:09:02.576 7f8312883700  1 --  --> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x5639f4654f30 con 0x5639f475a8d0
2019-04-17 16:09:02.576 7f8312883700  1 --2-  >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475a8d0 0x5639f475ad10 crc :-1 s=SESSION_CONNECTING pgs=0 cs=0 l=0 rx=0 tx=0).send_client_ident getsockname reveals I am 172.17.3.73:35344 when talking to v2:172.17.3.73:3300/0
2019-04-17 16:09:02.576 7f8312883700  1 -- 172.17.3.73:0/1458310170 learned_addr learned my addr 172.17.3.73:0/1458310170 (peer_addr_for_me v2:172.17.3.73:0/0)
2019-04-17 16:09:02.576 7f8312883700  1 --2- 172.17.3.73:0/1458310170 >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475a8d0 0x5639f475ad10 crc :-1 s=READY pgs=2406 cs=0 l=1 rx=0 tx=0).ready entity=mon.0 client_cookie=9c6ac40a59307686 server_cookie=0 in_seq=0 out_seq=0
2019-04-17 16:09:02.576 7f8311881700  1 -- 172.17.3.73:0/1458310170 <== mon.0 v2:172.17.3.73:3300/0 1 ==== mon_map magic: 0 v1 ==== 244+0+0 (crc 0 0 0) 0x5639f4654f30 con 0x5639f475a8d0
2019-04-17 16:09:02.577 7f8311881700  1 -- 172.17.3.73:0/1458310170 <== mon.0 v2:172.17.3.73:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (crc 0 0 0) 0x7f8304002570 con 0x5639f475a8d0
2019-04-17 16:09:02.577 7f8311881700  1 -- 172.17.3.73:0/1458310170 <== mon.0 v2:172.17.3.73:3300/0 3 ==== mon_map magic: 0 v1 ==== 244+0+0 (crc 0 0 0) 0x7f8304002970 con 0x5639f475a8d0
2019-04-17 16:09:02.577 7f832484d040  1 -- 172.17.3.73:0/1458310170 >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475a8d0 msgr2=0x5639f475ad10 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2019-04-17 16:09:02.577 7f832484d040  1 --2- 172.17.3.73:0/1458310170 >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475a8d0 0x5639f475ad10 crc :-1 s=READY pgs=2406 cs=0 l=1 rx=0 tx=0).stop
2019-04-17 16:09:02.578 7f832484d040  1 -- 172.17.3.73:0/1458310170 shutdown_connections 
2019-04-17 16:09:02.578 7f832484d040  1 --2- 172.17.3.73:0/1458310170 >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475a8d0 0x5639f475ad10 unknown :-1 s=CLOSED pgs=2406 cs=0 l=1 rx=0 tx=0).stop
2019-04-17 16:09:02.578 7f832484d040  1 -- 172.17.3.73:0/1458310170 shutdown_connections 
2019-04-17 16:09:02.578 7f832484d040  1 -- 172.17.3.73:0/1458310170 wait complete.
2019-04-17 16:09:02.578 7f832484d040  1 -- 172.17.3.73:0/1458310170 >> 172.17.3.73:0/1458310170 conn(0x5639f46cb700 msgr2=0x5639f46cbb40 unknown :-1 s=STATE_NONE l=0).mark_down
2019-04-17 16:09:02.578 7f832484d040  1  Processor -- start
2019-04-17 16:09:02.578 7f832484d040  1 --  start start
2019-04-17 16:09:02.579 7f832484d040  1 --2-  >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475efb0 0x5639f475f3f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rx=0 tx=0).connect
2019-04-17 16:09:02.579 7f832484d040  1 --  --> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] -- mon_getmap magic: 0 v1 -- 0x5639f4581580 con 0x5639f475efb0
2019-04-17 16:09:02.579 7f8312883700  1 --2-  >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475efb0 0x5639f475f3f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner_payload supported=0 required=0
2019-04-17 16:09:02.580 7f8312883700  1 --  --> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] -- mon_subscribe({config=0+,monmap=0+}) v3 -- 0x5639f475f940 con 0x5639f475efb0
2019-04-17 16:09:02.580 7f8312883700  1 --2-  >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475efb0 0x5639f475f3f0 crc :-1 s=SESSION_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0).send_client_ident getsockname reveals I am 172.17.3.73:35346 when talking to v2:172.17.3.73:3300/0
2019-04-17 16:09:02.580 7f8312883700  1 -- 172.17.3.73:0/611322385 learned_addr learned my addr 172.17.3.73:0/611322385 (peer_addr_for_me v2:172.17.3.73:0/0)
2019-04-17 16:09:02.580 7f8312883700  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475efb0 0x5639f475f3f0 crc :-1 s=READY pgs=2407 cs=0 l=1 rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2019-04-17 16:09:02.580 7f8311080700  1 -- 172.17.3.73:0/611322385 <== mon.0 v2:172.17.3.73:3300/0 1 ==== mon_map magic: 0 v1 ==== 244+0+0 (crc 0 0 0) 0x5639f475f940 con 0x5639f475efb0
2019-04-17 16:09:02.580 7f8311080700  1 -- 172.17.3.73:0/611322385 <== mon.0 v2:172.17.3.73:3300/0 2 ==== config(0 keys) v1 ==== 4+0+0 (crc 0 0 0) 0x7f830400e490 con 0x5639f475efb0
2019-04-17 16:09:02.580 7f8311080700  1 -- 172.17.3.73:0/611322385 <== mon.0 v2:172.17.3.73:3300/0 3 ==== mon_map magic: 0 v1 ==== 244+0+0 (crc 0 0 0) 0x7f8304004560 con 0x5639f475efb0
2019-04-17 16:09:02.580 7f832484d040  1 -- 172.17.3.73:0/611322385 --> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] -- mon_subscribe({mgrmap=0+}) v3 -- 0x5639f4761030 con 0x5639f475efb0
2019-04-17 16:09:02.580 7f832484d040  1 -- 172.17.3.73:0/611322385 --> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] -- mon_subscribe({osdmap=0}) v3 -- 0x5639f4761600 con 0x5639f475efb0
2019-04-17 16:09:02.581 7f8311080700  1 -- 172.17.3.73:0/611322385 <== mon.0 v2:172.17.3.73:3300/0 4 ==== mgrmap(e 3) v1 ==== 3378+0+0 (crc 0 0 0) 0x7f83040011d0 con 0x5639f475efb0
2019-04-17 16:09:02.581 7f8311080700  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.73:6800/89,v1:172.17.3.73:6801/89] conn(0x7f8300003db0 0x7f83000062d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rx=0 tx=0).connect
2019-04-17 16:09:02.581 7f8311080700  1 -- 172.17.3.73:0/611322385 <== mon.0 v2:172.17.3.73:3300/0 5 ==== osd_map(20..20 src has 1..20) v4 ==== 3090+0+0 (crc 0 0 0) 0x7f83040085b0 con 0x5639f475efb0
2019-04-17 16:09:02.581 7f832484d040  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.103:6800/20073,v1:172.17.3.103:6801/20073] conn(0x5639f4764b30 0x5639f4766f80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rx=0 tx=0).connect
2019-04-17 16:09:02.581 7f832484d040  1 -- 172.17.3.73:0/611322385 --> [v2:172.17.3.103:6800/20073,v1:172.17.3.103:6801/20073] -- osd_op(unknown.0.0:1 3.1c 3:3831950c:::rbd_directory:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected e20) v8 -- 0x5639f45d3050 con 0x5639f4764b30
2019-04-17 16:09:02.581 7f8312082700  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.73:6800/89,v1:172.17.3.73:6801/89] conn(0x7f8300003db0 0x7f83000062d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner_payload supported=0 required=0
2019-04-17 16:09:02.582 7f8313084700  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.103:6800/20073,v1:172.17.3.103:6801/20073] conn(0x5639f4764b30 0x5639f4766f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rx=0 tx=0)._handle_peer_banner_payload supported=0 required=0
2019-04-17 16:09:02.583 7f8312082700  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.73:6800/89,v1:172.17.3.73:6801/89] conn(0x7f8300003db0 0x7f83000062d0 crc :-1 s=READY pgs=764 cs=0 l=1 rx=0 tx=0).ready entity=mgr.4122 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2019-04-17 16:09:02.584 7f8313084700  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.103:6800/20073,v1:172.17.3.103:6801/20073] conn(0x5639f4764b30 0x5639f4766f80 crc :-1 s=READY pgs=258 cs=0 l=1 rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2019-04-17 16:09:02.585 7f8313084700  1 -- 172.17.3.73:0/611322385 <== osd.0 v2:172.17.3.103:6800/20073 1 ==== osd_op_reply(1 rbd_directory [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) v8 ==== 157+0+0 (crc 0 0 0) 0x7f830c004120 con 0x5639f4764b30
2019-04-17 16:09:02.585 7f832484d040  1 -- 172.17.3.73:0/611322385 >> [v2:172.17.3.103:6800/20073,v1:172.17.3.103:6801/20073] conn(0x5639f4764b30 msgr2=0x5639f4766f80 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2019-04-17 16:09:02.585 7f832484d040  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.103:6800/20073,v1:172.17.3.103:6801/20073] conn(0x5639f4764b30 0x5639f4766f80 crc :-1 s=READY pgs=258 cs=0 l=1 rx=0 tx=0).stop
2019-04-17 16:09:02.585 7f832484d040  1 -- 172.17.3.73:0/611322385 >> [v2:172.17.3.73:6800/89,v1:172.17.3.73:6801/89] conn(0x7f8300003db0 msgr2=0x7f83000062d0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2019-04-17 16:09:02.585 7f832484d040  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.73:6800/89,v1:172.17.3.73:6801/89] conn(0x7f8300003db0 0x7f83000062d0 crc :-1 s=READY pgs=764 cs=0 l=1 rx=0 tx=0).stop
2019-04-17 16:09:02.585 7f832484d040  1 -- 172.17.3.73:0/611322385 >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475efb0 msgr2=0x5639f475f3f0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down
2019-04-17 16:09:02.585 7f832484d040  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475efb0 0x5639f475f3f0 crc :-1 s=READY pgs=2407 cs=0 l=1 rx=0 tx=0).stop
2019-04-17 16:09:02.586 7f832484d040  1 -- 172.17.3.73:0/611322385 shutdown_connections 
2019-04-17 16:09:02.586 7f832484d040  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.103:6800/20073,v1:172.17.3.103:6801/20073] conn(0x5639f4764b30 0x5639f4766f80 unknown :-1 s=CLOSED pgs=258 cs=0 l=1 rx=0 tx=0).stop
2019-04-17 16:09:02.586 7f832484d040  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.73:3300/0,v1:172.17.3.73:6789/0] conn(0x5639f475efb0 0x5639f475f3f0 unknown :-1 s=CLOSED pgs=2407 cs=0 l=1 rx=0 tx=0).stop
2019-04-17 16:09:02.586 7f832484d040  1 --2- 172.17.3.73:0/611322385 >> [v2:172.17.3.73:6800/89,v1:172.17.3.73:6801/89] conn(0x7f8300003db0 0x7f83000062d0 unknown :-1 s=CLOSED pgs=764 cs=0 l=1 rx=0 tx=0).stop
2019-04-17 16:09:02.586 7f832484d040  1 -- 172.17.3.73:0/611322385 shutdown_connections 
2019-04-17 16:09:02.586 7f832484d040  1 -- 172.17.3.73:0/611322385 wait complete.
2019-04-17 16:09:02.586 7f832484d040  1 -- 172.17.3.73:0/611322385 >> 172.17.3.73:0/611322385 conn(0x5639f475a870 msgr2=0x5639f46cb700 unknown :-1 s=STATE_NONE l=0).mark_down
bash-4.4#
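
To spell out why the -n flag matters: the Ceph CLI tools authenticate as client.admin unless a different client name is supplied, so pointing --keyring at a file that only contains a client.openstack entry leaves the client hunting for a key it cannot find. Any of the following equivalent invocations (illustrative, not taken from the comment above) select the intended identity:

# pass the full client name
rbd -n client.openstack --keyring /etc/ceph/ceph.client.openstack.keyring -p vms ls -l
# or just the id portion
rbd --id openstack --keyring /etc/ceph/ceph.client.openstack.keyring -p vms ls -l
# or set it once for the shell session
export CEPH_ARGS="--id openstack --keyring /etc/ceph/ceph.client.openstack.keyring"
rbd -p vms ls -l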

