The upstream docs had the wrong flag names; PR to fix: https://github.com/ceph/ceph/pull/40447. The flags will be --realm and --zone once this is merged; currently, passing --rgw-realm and --rgw-zone doesn't do anything. Both flags and positional arguments are supported.
Daniel, do we need to backport https://github.com/ceph/ceph/pull/40447 to Pacific?
Ken, it was already backported in this PR from Sage: https://github.com/ceph/ceph/pull/40437
Ok, thanks. That PR is in the v16.2.0 upstream tag, so it will be in today's downstream rebase.
These flags (--rgw-zone, --rgw-realm), in addition to all the flags in https://github.com/ceph/ceph/blob/47d00f6503c721932547533a0ada89c221987736/src/common/legacy_config_opts.h, are accepted by every single ceph, ceph-mon, ceph-mgr, radosgw-admin, ... command. It's the way the ceph CLI works: basically any config option can be specified via the CLI for any ceph daemon/executable.

Examples:

[ceph: root@vm-00 /]# ceph -s --pid-file t
  cluster:
    id:     827aca78-97e0-11eb-b38a-525400a09620
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum vm-00,vm-01,vm-02 (age 19m)
    mgr: vm-00.hehsce(active, since 22m), standbys: vm-02.bxtxvm
    osd: 3 osds: 3 up (since 19m), 3 in (since 19m)

  data:
    pools:   1 pools, 128 pgs
    objects: 0 objects, 0 B
    usage:   18 MiB used, 450 GiB / 450 GiB avail
    pgs:     128 active+clean

[ceph: root@vm-00 /]# ceph orch host ls --mempool_debug
HOST   ADDR            LABELS  STATUS
vm-00  192.168.122.28
vm-01  vm-01
vm-02  vm-02

[ceph: root@vm-00 /]# ceph config-key ls --bluestore_debug_inject_bug21040
[
    "config-history/10/",
    "config-history/10/+mgr/mgr/dashboard/ssl_server_port",
    "config-history/11/",
    "config-history/11/+mgr/mgr/dashboard/GRAFANA_API_SSL_VERIFY",
    "config-history/12/",
    "config-history/12/+mgr/mgr/dashboard/ALERTMANAGER_API_HOST",
    "config-history/13/",
    "config-history/13/+mgr/mgr/dashboard/PROMETHEUS_API_HOST",
    "config-history/14/",
    "config-history/14/+mgr/mgr/dashboard/GRAFANA_API_URL",
    "config-history/2/",
    "config-history/2/+global/container_image",
    "config-history/3/",
    "config-history/3/+mon/public_network",
    "config-history/4/",
    "config-history/4/+mgr/mgr/cephadm/migration_current",
    "config-history/5/",
    "config-history/5/+mgr/mgr/cephadm/migration_current",
    "config-history/5/-mgr/mgr/cephadm/migration_current",
    "config-history/6/",
    "config-history/6/+mgr/mgr/cephadm/migration_current",
    "config-history/6/-mgr/mgr/cephadm/migration_current",
    "config-history/7/",
    "config-history/7/+mgr/mgr/orchestrator/orchestrator",
    "config-history/8/",
    "config-history/8/+global/container_image",
    "config-history/8/-global/container_image",
    "config-history/9/",
    "config-history/9/+mgr/mgr/cephadm/container_init",
    "config/global/container_image",
    "config/mgr/mgr/cephadm/container_init",
    "config/mgr/mgr/cephadm/migration_current",
    "config/mgr/mgr/dashboard/ALERTMANAGER_API_HOST",
    "config/mgr/mgr/dashboard/GRAFANA_API_SSL_VERIFY",
    "config/mgr/mgr/dashboard/GRAFANA_API_URL",
    "config/mgr/mgr/dashboard/PROMETHEUS_API_HOST",
    "config/mgr/mgr/dashboard/ssl_server_port",
    "config/mgr/mgr/orchestrator/orchestrator",
    "config/mon/public_network",
    "mgr/cephadm/config_checks",
    "mgr/cephadm/grafana_crt",
    "mgr/cephadm/grafana_key",
    "mgr/cephadm/host.vm-00",
    "mgr/cephadm/host.vm-01",
    "mgr/cephadm/host.vm-02",
    "mgr/cephadm/inventory",
    "mgr/cephadm/osd_remove_queue",
    "mgr/cephadm/spec.alertmanager",
    "mgr/cephadm/spec.crash",
    "mgr/cephadm/spec.grafana",
    "mgr/cephadm/spec.mgr",
    "mgr/cephadm/spec.mon",
    "mgr/cephadm/spec.node-exporter",
    "mgr/cephadm/spec.osd.all-available-devices",
    "mgr/cephadm/spec.prometheus",
    "mgr/cephadm/ssh_identity_key",
    "mgr/cephadm/ssh_identity_pub",
    "mgr/cephadm/ssh_user",
    "mgr/dashboard/accessdb_v2",
    "mgr/dashboard/crt",
    "mgr/dashboard/key",
    "mgr/devicehealth/last_scrape",
    "mgr/progress/completed",
    "mgr/telemetry/report_id",
    "mgr/telemetry/salt"
]

The problem here was that the documentation was wrong; that has been fixed and verified. If these flags being accepted is itself considered a problem, then there is a much bigger problem outside the scope of cephadm.
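To make the mechanism concrete, here is a minimal sketch (in Python, and not Ceph's actual implementation, which lives in the C++ config machinery) of the pattern described above: any flag whose normalized name matches a known config option is accepted by any command and applied as a config override, regardless of whether that option is relevant to the command. The option set `KNOWN` below is a hypothetical subset standing in for the options defined in legacy_config_opts.h.

```python
def parse_overrides(argv, known_options):
    """Split argv into positional args and config-option overrides.

    Flags like --rgw-realm or --mempool_debug are accepted whenever the
    normalized name (dashes mapped to underscores) is a known config
    option; unknown flags raise an error. This mirrors, loosely, how the
    ceph CLI lets any config option be passed to any command.
    """
    overrides = {}
    positional = []
    i = 0
    while i < len(argv):
        arg = argv[i]
        if arg.startswith("--"):
            name = arg[2:].replace("-", "_")  # --rgw-realm -> rgw_realm
            if name not in known_options:
                raise ValueError(f"unrecognized option: {arg}")
            # Consume the next token as the value if one follows;
            # otherwise treat the flag as a boolean "true".
            if i + 1 < len(argv) and not argv[i + 1].startswith("--"):
                overrides[name] = argv[i + 1]
                i += 1
            else:
                overrides[name] = "true"
        else:
            positional.append(arg)
        i += 1
    return positional, overrides

# Hypothetical subset of config option names (stand-in for
# legacy_config_opts.h):
KNOWN = {"rgw_realm", "rgw_zone", "pid_file", "mempool_debug"}
```

With this sketch, `parse_overrides(["-s", "--pid-file", "t"], KNOWN)` yields `(["-s"], {"pid_file": "t"})`, matching the `ceph -s --pid-file t` example above: the override is accepted even though it has nothing to do with printing cluster status.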
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294