Bug 1942013 - RGW apply cmd options should be updated
Summary: RGW apply cmd options should be updated
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Daniel Pivonka
QA Contact: Sunil Kumar Nagaraju
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-03-23 12:34 UTC by Sunil Kumar Nagaraju
Modified: 2021-08-30 08:29 UTC
CC: 3 users

Fixed In Version: ceph-16.2.0-1.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:29:10 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-1194 0 None None None 2021-08-30 00:16:13 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:29:19 UTC

Comment 1 Daniel Pivonka 2021-03-26 20:28:14 UTC
The upstream docs had the wrong flag names. PR to fix: https://github.com/ceph/ceph/pull/40447

Once that is merged, the flags will be --realm and --zone.

Using --rgw-realm and --rgw-zone does nothing.

Both flags and positional arguments are supported.
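With the corrected flag names, usage would look roughly like this (the service name, realm, zone, and placement values below are illustrative, not taken from this bug):

[ceph: root@vm-00 /]# ceph orch apply rgw myrgw --realm=myrealm --zone=myzone --placement="2 vm-01 vm-02"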

Comment 2 Ken Dreyer (Red Hat) 2021-03-31 21:58:22 UTC
Daniel, do we need to backport https://github.com/ceph/ceph/pull/40447 to pacific?

Comment 3 Daniel Pivonka 2021-04-01 19:07:21 UTC
Ken, it was already backported in this PR from Sage: https://github.com/ceph/ceph/pull/40437

Comment 4 Ken Dreyer (Red Hat) 2021-04-05 18:41:30 UTC
Ok, thanks. That PR is in the v16.2.0 upstream tag, so it will be in today's downstream rebase.

Comment 10 Daniel Pivonka 2021-04-07 21:23:03 UTC
These flags (--rgw-zone, --rgw-realm), along with every flag in this file https://github.com/ceph/ceph/blob/47d00f6503c721932547533a0ada89c221987736/src/common/legacy_config_opts.h

are accepted by every single ceph, ceph-mon, ceph-mgr, radosgw-admin, ... command.

That is simply how the Ceph CLI works: basically any config option can be specified via the CLI for any Ceph daemon/executable.


examples:
[ceph: root@vm-00 /]# ceph -s --pid-file t            
  cluster:
    id:     827aca78-97e0-11eb-b38a-525400a09620
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum vm-00,vm-01,vm-02 (age 19m)
    mgr: vm-00.hehsce(active, since 22m), standbys: vm-02.bxtxvm
    osd: 3 osds: 3 up (since 19m), 3 in (since 19m)
 
  data:
    pools:   1 pools, 128 pgs
    objects: 0 objects, 0 B
    usage:   18 MiB used, 450 GiB / 450 GiB avail
    pgs:     128 active+clean

[ceph: root@vm-00 /]# ceph orch host ls --mempool_debug
HOST   ADDR            LABELS  STATUS  
vm-00  192.168.122.28                  
vm-01  vm-01                           
vm-02  vm-02   


[ceph: root@vm-00 /]# ceph config-key ls --bluestore_debug_inject_bug21040
[
    "config-history/10/",
    "config-history/10/+mgr/mgr/dashboard/ssl_server_port",
    "config-history/11/",
    "config-history/11/+mgr/mgr/dashboard/GRAFANA_API_SSL_VERIFY",
    "config-history/12/",
    "config-history/12/+mgr/mgr/dashboard/ALERTMANAGER_API_HOST",
    "config-history/13/",
    "config-history/13/+mgr/mgr/dashboard/PROMETHEUS_API_HOST",
    "config-history/14/",
    "config-history/14/+mgr/mgr/dashboard/GRAFANA_API_URL",
    "config-history/2/",
    "config-history/2/+global/container_image",
    "config-history/3/",
    "config-history/3/+mon/public_network",
    "config-history/4/",
    "config-history/4/+mgr/mgr/cephadm/migration_current",
    "config-history/5/",
    "config-history/5/+mgr/mgr/cephadm/migration_current",
    "config-history/5/-mgr/mgr/cephadm/migration_current",
    "config-history/6/",
    "config-history/6/+mgr/mgr/cephadm/migration_current",
    "config-history/6/-mgr/mgr/cephadm/migration_current",
    "config-history/7/",
    "config-history/7/+mgr/mgr/orchestrator/orchestrator",
    "config-history/8/",
    "config-history/8/+global/container_image",
    "config-history/8/-global/container_image",
    "config-history/9/",
    "config-history/9/+mgr/mgr/cephadm/container_init",
    "config/global/container_image",
    "config/mgr/mgr/cephadm/container_init",
    "config/mgr/mgr/cephadm/migration_current",
    "config/mgr/mgr/dashboard/ALERTMANAGER_API_HOST",
    "config/mgr/mgr/dashboard/GRAFANA_API_SSL_VERIFY",
    "config/mgr/mgr/dashboard/GRAFANA_API_URL",
    "config/mgr/mgr/dashboard/PROMETHEUS_API_HOST",
    "config/mgr/mgr/dashboard/ssl_server_port",
    "config/mgr/mgr/orchestrator/orchestrator",
    "config/mon/public_network",
    "mgr/cephadm/config_checks",
    "mgr/cephadm/grafana_crt",
    "mgr/cephadm/grafana_key",
    "mgr/cephadm/host.vm-00",
    "mgr/cephadm/host.vm-01",
    "mgr/cephadm/host.vm-02",
    "mgr/cephadm/inventory",
    "mgr/cephadm/osd_remove_queue",
    "mgr/cephadm/spec.alertmanager",
    "mgr/cephadm/spec.crash",
    "mgr/cephadm/spec.grafana",
    "mgr/cephadm/spec.mgr",
    "mgr/cephadm/spec.mon",
    "mgr/cephadm/spec.node-exporter",
    "mgr/cephadm/spec.osd.all-available-devices",
    "mgr/cephadm/spec.prometheus",
    "mgr/cephadm/ssh_identity_key",
    "mgr/cephadm/ssh_identity_pub",
    "mgr/cephadm/ssh_user",
    "mgr/dashboard/accessdb_v2",
    "mgr/dashboard/crt",
    "mgr/dashboard/key",
    "mgr/devicehealth/last_scrape",
    "mgr/progress/completed",
    "mgr/telemetry/report_id",
    "mgr/telemetry/salt"
]


The problem here was that the documentation was wrong; that has been fixed and verified.

If these flags being accepted here is a problem, then there is a much bigger problem outside the scope of cephadm.

Comment 13 errata-xmlrpc 2021-08-30 08:29:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

