Description of problem:
Following the steps described in the doc here: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/installation_guide/index#configuring-a-multisite-ceph-object-gateway-with-multiple-realms_install
Version-Release number of selected component (if applicable):
ceph version 14.2.11-49.el7cp
ansible-2.9.14-1.el7ae.noarch
ceph-ansible-4.0.34-1.el7cp.noarch
After following the steps in the doc and creating a host_vars directory containing the per-host realm configuration, the playbook fails here:
TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ************************************************************************************************
Tuesday 13 October 2020 09:12:39 -0400 (0:00:00.197) 0:00:26.599 *******
ok: [ceph-tejas-1602577535750-node3-monrgwosd] => (item=0)
ok: [ceph-tejas-1602577535750-node2-monrgwosd] => (item=0)
skipping: [ceph-tejas-1602577535750-node1-monmgrinstallerosd] => (item=0)
fatal: [ceph-tejas-1602577535750-node4-rgw]: FAILED! =>
msg: |-
The task includes an option with an undefined variable. The error was: 'rgw_realm' is undefined
The error appears to be in '/usr/share/ceph-ansible/roles/ceph-facts/tasks/set_radosgw_address.yml': line 60, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: set_fact rgw_instances with rgw multisite
^ here
Need to determine whether this is a doc issue, and if so, what the correct steps are to configure multiple replicated realms.
group_vars/all.yml:
ceph_conf_overrides:
  client:
    rgw crypt require ssl: false
    rgw crypt s3 kms encryption keys: testkey-1=YmluCmJvb3N0CmJvb3N0LWJ1aWxkCmNlcGguY29uZgo=
      testkey-2=aWIKTWFrZWZpbGUKbWFuCm91dApzcmMKVGVzdGluZwo=
  global:
    mon_max_pg_per_osd: 1024
    osd_default_pool_size: 2
    osd_pool_default_pg_num: 64
    osd_pool_default_pgp_num: 64
  mon:
    mon_allow_pool_delete: true
ceph_origin: distro
ceph_repository: rhcs
ceph_stable: true
ceph_stable_release: nautilus
ceph_stable_rh_storage: true
ceph_test: true
cephfs_pools:
  - name: cephfs_data
    pgs: '8'
  - name: cephfs_metadata
    pgs: '8'
copy_admin_key: true
dashboard_enabled: false
fetch_directory: ~/fetch
journal_size: 1024
osd_scenario: lvm
public_network: 10.0.100.0/22
rgw_multisite: true
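For reference, this is the kind of per-host file the doc asks for. The sketch below follows the ceph-ansible stable-4.0 multisite examples; the realm/zonegroup/zone names, user, keys, and port are placeholders, not values taken from this cluster, so only the variable names are meaningful:

```yaml
# host_vars/ceph-tejas-1602577535750-node4-rgw.yml
# (all values below are placeholders for illustration)
rgw_zonemaster: true
rgw_zonesecondary: false
rgw_zonegroupmaster: true
rgw_multisite_proto: http
rgw_instances:
  - instance_name: rgw0
    rgw_realm: realm1                      # placeholder realm name
    rgw_zonegroup: zonegroup1              # placeholder zonegroup name
    rgw_zone: zone1                        # placeholder zone name
    radosgw_address: "{{ _radosgw_address }}"
    radosgw_frontend_port: 8080
    rgw_zone_user: sync-user               # placeholder system user
    rgw_zone_user_display_name: "Sync User"
    rgw_realm_system_access_key: PLACEHOLDERACCESSKEY
    rgw_realm_system_secret_key: PLACEHOLDERSECRETKEY
```

The failing task (set_radosgw_address.yml building rgw_instances) reads rgw_realm from vars like these, so if the host_vars filename does not exactly match the inventory hostname of node4-rgw, that host would hit exactly the "'rgw_realm' is undefined" error shown above.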