Bug 1931909 - RGW-NFS failure - unable to create nfs-export in dashboard
Summary: RGW-NFS failure - unable to create nfs-export in dashboard
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Daniel Pivonka
QA Contact: Vidushi Mishra
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks: 1941996
 
Reported: 2021-02-23 14:45 UTC by Sunil Kumar Nagaraju
Modified: 2021-08-30 08:28 UTC
CC List: 11 users

Fixed In Version: ceph-16.2.0-13.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1941996
Environment:
Last Closed: 2021-08-30 08:28:20 UTC
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-1052 (last updated 2021-08-27 05:15:03 UTC)
Red Hat Product Errata RHBA-2021:3294 (last updated 2021-08-30 08:28:39 UTC)

Comment 4 Juan Miguel Olmo 2021-04-05 09:18:41 UTC
What is the current status of this bug? Daniel, can you review the procedure used by Sunil?

Comment 5 Daniel Pivonka 2021-04-06 21:41:10 UTC
I was able to deploy rgw-nfs successfully following these steps:

radosgw-admin realm create --rgw-realm=test_realm --default
radosgw-admin zonegroup create --rgw-zonegroup=test_group --rgw-realm=test_realm --master --default
radosgw-admin zone create --rgw-zonegroup=test_group --rgw-zone=test_zone --rgw-realm=test_realm --master --default
radosgw-admin user create --uid=test_user --display-name=TEST_USER --system
radosgw-admin period update --rgw-realm=test_realm --commit
ceph orch apply rgw example_service_id test_realm test_zone
echo -n "{'example_service_id.vm-00.xwdvrg': '4MDe2c8MTx6ztGEecMzRbNMo8COdEjTbft3MK8JW', 'example_service_id.vm-02.sqaitd': '4MDe2c8MTx6ztGEecMzRbNMo8COdEjTbft3MK8JW'}" > sk
echo -n "{'example_service_id.vm-00.xwdvrg': 'HK2IS42BTV3AM7FH3HMY', 'example_service_id.vm-02.sqaitd': 'HK2IS42BTV3AM7FH3HMY'}" > ak
ceph dashboard set-rgw-api-secret-key -i sk
ceph dashboard set-rgw-api-access-key -i ak
ceph osd pool create nfs-ganesha
ceph osd pool application enable nfs-ganesha rgw
ceph orch apply nfs foo --pool nfs-ganesha --namespace foo
ceph dashboard set-ganesha-clusters-rados-pool-namespace nfs-ganesha/foo
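For readability, here is a commented version of the dashboard-configuration steps from the list above; the daemon names and keys are placeholders for whatever 'ceph orch ps' and 'radosgw-admin user create' return in a given run:

# sk/ak each hold a map from rgw daemon name (service_id.host.suffix, as shown
# by 'ceph orch ps') to the system user's secret/access key
echo -n "{'example_service_id.<host1>.<id1>': '<SECRET_KEY>', 'example_service_id.<host2>.<id2>': '<SECRET_KEY>'}" > sk
echo -n "{'example_service_id.<host1>.<id1>': '<ACCESS_KEY>', 'example_service_id.<host2>.<id2>': '<ACCESS_KEY>'}" > ak
ceph dashboard set-rgw-api-secret-key -i sk     # dashboard reads the per-daemon map from the file
ceph dashboard set-rgw-api-access-key -i ak
ceph dashboard set-ganesha-clusters-rados-pool-namespace nfs-ganesha/foo   # <pool>/<namespace> holding the nfs config objects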

create the export in the dashboard (screenshot: https://pasteboard.co/JwR6MHD.png)



The service was listed in ceph status:

[ceph: root@vm-00 /]# ceph -s
  cluster:
    id:     2081a11e-971f-11eb-b9fe-52540090618c
    health: HEALTH_WARN
            1 stray daemon(s) not managed by cephadm

 
  services:
    mon:     3 daemons, quorum vm-00,vm-02,vm-01 (age 4m)
    mgr:     vm-00.zjprhg(active, since 7m), standbys: vm-02.gwmnsf
    osd:     3 osds: 3 up (since 4m), 3 in (since 4m)
    rgw:     2 daemons active (2 hosts, 1 zones)
    rgw-nfs: 1 daemon active (1 hosts, 1 zones)
 
  data:
    pools:   7 pools, 193 pgs
    objects: 349 objects, 28 KiB
    usage:   39 MiB used, 450 GiB / 450 GiB avail
    pgs:     193 active+clean
 
  io:
    client:   67 KiB/s rd, 782 B/s wr, 132 op/s rd, 3 op/s wr


and there were no problems or errors in the dashboard.

There was one other problem, though: the NFS daemon is being marked as stray.

I'm investigating that now.

Comment 6 Daniel Pivonka 2021-04-07 19:15:30 UTC
This command in the list of steps above, 'ceph osd pool application enable nfs-ganesha rgw',

should actually be 'ceph osd pool application enable nfs-ganesha nfs'.

It's functionally the same; you're just naming the pool application, so it can really be anything. 'nfs' is the technically correct application, though. See the corrected snippet below.
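In other words, the corrected pool setup from comment 5 reads:

ceph osd pool create nfs-ganesha
ceph osd pool application enable nfs-ganesha nfs    # 'nfs' instead of 'rgw'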



Please retest this.

Comment 7 Varsha 2021-04-12 18:21:26 UTC
Export creation is not successful.

Steps to reproduce:
1. Deploy nfs service
2. Create cephfs/rgw export

No error is reported by the dashboard on export creation, but the export object URL is not written to the ganesha conf object.

[root@localhost build]# ./bin/rados -p nfs-ganesha -N test-nfs-dashboard ls
grace
conf-nfs.test-nfs-dashboard
rec-0000000000000002:nfs.test-nfs-dashboard.localhost
export-1
export-2
[root@localhost build]# ./bin/rados -p nfs-ganesha -N test-nfs-dashboard get conf-nfs.test-nfs-dashboard -

[root@localhost build]# ./bin/rados -p nfs-ganesha -N test-nfs-dashboard get export-2 -
EXPORT {
    export_id = 2;
    path = "/";
    pseudo = "/testFS";
    access_type = "RW";
    squash = "no_root_squash";
    protocols = 4;
    transports = "TCP";
    FSAL {
        name = "CEPH";
        user_id = "crash.localhost.localdomain";
        filesystem = "a";
        secret_access_key = "AQCLh3Rg9dZzIhAAuD6JM7iwy2h4dqzEpTvdlQ==";
    }

}
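For context, when export creation goes through fully the conf-nfs.test-nfs-dashboard object read above is expected to contain a %url line per export, along the lines of

%url "rados://nfs-ganesha/test-nfs-dashboard/export-2"

which ganesha follows to load the export block; here the conf object comes back empty.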


I don't see any error in the mgr log:

2021-04-12T23:33:24.857+0530 7f45f462a640  0 [dashboard DEBUG ganesha] write configuration into rados object nfs-ganesha/test-nfs-dashboard/export-2:
EXPORT {
    export_id = 2;
    path = "/";
    pseudo = "/testFS";
    access_type = "RW";
    squash = "no_root_squash";
    protocols = 4;
    transports = "TCP";
    FSAL {
        name = "CEPH";
        user_id = "crash.localhost.localdomain";
        filesystem = "a";
        secret_access_key = "AQCLh3Rg9dZzIhAAuD6JM7iwy2h4dqzEpTvdlQ==";
    }

}


2021-04-12T23:33:24.857+0530 7f45f462a640  1 -- 192.168.0.138:0/3451055247 --> [v2:192.168.0.138:6802/172378,v1:192.168.0.138:6803/172378] -- osd_op(unknown.0.0:2699 4.2 4:470071bb:test-nfs-dashboard::conf-nfs.test-nfs-dashboard:head [writefull 0~0] snapc 0=[] ondisk+write+known_if_redirected e212) v8 -- 0x55d1ad6ed400 con 0x55d1ad122d80
2021-04-12T23:33:24.862+0530 7f464bcc9640  1 -- 192.168.0.138:0/3451055247 <== osd.0 v2:192.168.0.138:6802/172378 2738 ==== osd_op_reply(2699 conf-nfs.test-nfs-dashboard [writefull 0~0] v212'4 uv1 ondisk = 0) v8 ==== 171+0+0 (crc 0 0 0) 0x55d1abf0c240 con 0x55d1ad122d80
2021-04-12T23:33:24.862+0530 7f45f462a640  0 [dashboard DEBUG ganesha] write configuration into rados object nfs-ganesha/test-nfs-dashboard/conf-nfs.test-nfs-dashboard:

2021-04-12T23:33:24.862+0530 7f45f462a640  1 -- 192.168.0.138:0/3451055247 --> [v2:192.168.0.138:6802/172378,v1:192.168.0.138:6803/172378] -- osd_op(unknown.0.0:2700 4.2 4:470071bb:test-nfs-dashboard::conf-nfs.test-nfs-dashboard:head [notify cookie 94359023357312 in=12b] snapc 0=[] ondisk+read+known_if_redirected e212) v8 -- 0x55d1ad6ed800 con 0x55d1ad122d80
2021-04-12T23:33:24.864+0530 7f464bcc9640  1 -- 192.168.0.138:0/3451055247 <== osd.0 v2:192.168.0.138:6802/172378 2739 ==== osd_op_reply(2700 conf-nfs.test-nfs-dashboard [notify cookie 94359023357312 out=8b] v0'0 uv1 ondisk = 0) v8 ==== 171+0+8 (crc 0 0 0) 0x55d1abf0c240 con 0x55d1ad122d80
2021-04-12T23:33:24.865+0530 7f464bcc9640  1 -- 192.168.0.138:0/3451055247 <== osd.0 v2:192.168.0.138:6802/172378 2740 ==== watch-notify(notify_complete (2) cookie 94359023357312 notify 910533066771 ret 0) v3 ==== 42+0+28 (crc 0 0 0) 0x55d1abfd01a0 con 0x55d1ad122d80
2021-04-12T23:33:24.865+0530 7f45f462a640  0 [dashboard DEBUG taskexec] successfully finished task: Task(ns=nfs/create, md={'path': '/', 'fsal': 'CEPH', 'cluster_id': 'test-nfs-dashboard'})
2021-04-12T23:33:24.865+0530 7f45f462a640  0 [dashboard DEBUG task] execution of Task(ns=nfs/create, md={'path': '/', 'fsal': 'CEPH', 'cluster_id': 'test-nfs-dashboard'}) finished in: 0.15959858894348145 s

Comment 8 Daniel Pivonka 2021-04-14 18:27:33 UTC
>>> [root@vm-00 ~]# ./cephadm bootstrap --mon-ip 192.168.122.148 --initial-dashboard-password admin  --dashboard-password-noupdate
>>> Creating directory /etc/ceph for ceph.conf
>>> Verifying podman|docker is present...
>>> Verifying lvm2 is present...
>>> Verifying time synchronization is in place...
>>> Unit chronyd.service is enabled and running
>>> Repeating the final host check...
>>> podman|docker (/usr/bin/podman) is present
>>> systemctl is present
>>> lvcreate is present
>>> Unit chronyd.service is enabled and running
>>> Host looks OK
>>> Cluster fsid: a89470d2-9d4a-11eb-b3ae-52540015770c
>>> Verifying IP 192.168.122.148 port 3300 ...
>>> Verifying IP 192.168.122.148 port 6789 ...
>>> Mon IP 192.168.122.148 is in CIDR network 192.168.122.0/24
>>> - internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
>>> Pulling container image docker.io/ceph/ceph:v16...
>>> Ceph version: ceph version 16.2.0 (0c2054e95bcd9b30fdd908a79ac1d8bbc3394442) pacific (stable)
>>> Extracting ceph user uid/gid from container image...
>>> Creating initial keys...
>>> Creating initial monmap...
>>> Creating mon...
>>> Waiting for mon to start...
>>> Waiting for mon...
>>> mon is available
>>> Assimilating anything we can from ceph.conf...
>>> Generating new minimal ceph.conf...
>>> Restarting the monitor...
>>> Setting mon public_network to 192.168.122.0/24
>>> Wrote config to /etc/ceph/ceph.conf
>>> Wrote keyring to /etc/ceph/ceph.client.admin.keyring
>>> Creating mgr...
>>> Verifying port 9283 ...
>>> Waiting for mgr to start...
>>> Waiting for mgr...
>>> mgr not available, waiting (1/15)...
>>> mgr not available, waiting (2/15)...
>>> mgr not available, waiting (3/15)...
>>> mgr is available
>>> Enabling cephadm module...
>>> Waiting for the mgr to restart...
>>> Waiting for mgr epoch 5...
>>> mgr epoch 5 is available
>>> Setting orchestrator backend to cephadm...
>>> Generating ssh key...
>>> Wrote public SSH key to /etc/ceph/ceph.pub
>>> Adding key to root@localhost authorized_keys...
>>> Adding host vm-00...
>>> Deploying mon service with default placement...
>>> Deploying mgr service with default placement...
>>> Deploying crash service with default placement...
>>> Enabling mgr prometheus module...
>>> Deploying prometheus service with default placement...
>>> Deploying grafana service with default placement...
>>> Deploying node-exporter service with default placement...
>>> Deploying alertmanager service with default placement...
>>> Enabling the dashboard module...
>>> Waiting for the mgr to restart...
>>> Waiting for mgr epoch 13...
>>> mgr epoch 13 is available
>>> Generating a dashboard self-signed certificate...
>>> Creating initial admin user...
>>> Fetching dashboard port number...
>>> Ceph Dashboard is now available at:
>>> 
>>> 	     URL: https://vm-00:8443/
>>> 	    User: admin
>>> 	Password: admin
>>> 
>>> You can access the Ceph CLI with:
>>> 
>>> 	sudo ./cephadm shell --fsid a89470d2-9d4a-11eb-b3ae-52540015770c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
>>> 
>>> Please consider enabling telemetry to help improve Ceph:
>>> 
>>> 	ceph telemetry on
>>> 
>>> For more information see:
>>> 
>>> 	https://docs.ceph.com/docs/pacific/mgr/telemetry/
>>> 
>>> Bootstrap complete.
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@vm-01
>>> /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
>>> The authenticity of host 'vm-01 (192.168.122.222)' can't be established.
>>> ECDSA key fingerprint is SHA256:MzItVrbFAl6Rdz0Yq0DMTZ+Sg/AVeAGy1MnOOQ48Z/c.
>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
>>> 
>>> Number of key(s) added: 1
>>> 
>>> Now try logging into the machine, with:   "ssh 'root@vm-01'"
>>> and check to make sure that only the key(s) you wanted were added.
>>> 
>>> [root@vm-00 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@vm-02
>>> /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
>>> The authenticity of host 'vm-02 (192.168.122.113)' can't be established.
>>> ECDSA key fingerprint is SHA256:B7m/vVuax5yYJJVlKu0Gp4/uuyczMbnSPfL2XGH4zI0.
>>> Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
>>> 
>>> Number of key(s) added: 1
>>> 
>>> Now try logging into the machine, with:   "ssh 'root@vm-02'"
>>> and check to make sure that only the key(s) you wanted were added.
>>> 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# 
>>> [root@vm-00 ~]# ./cephadm shell
>>> Inferring fsid a89470d2-9d4a-11eb-b3ae-52540015770c
>>> Inferring config /var/lib/ceph/a89470d2-9d4a-11eb-b3ae-52540015770c/mon.vm-00/config
>>> Using recent ceph image docker.io/ceph/ceph@sha256:9b04c0f15704c49591640a37c7adfd40ffad0a4b42fecb950c3407687cb4f29a
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch host add vm-01
>>> Added host 'vm-01'
>>> [ceph: root@vm-00 /]# ceph orch host add vm-02
>>> Added host 'vm-02'
>>> [ceph: root@vm-00 /]# ceph orch apply osd --all-available-devices
>>> Scheduled osd.all-available-devices update...
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph -s
>>>   cluster:
>>>     id:     a89470d2-9d4a-11eb-b3ae-52540015770c
>>>     health: HEALTH_WARN
>>>             OSD count 0 < osd_pool_default_size 3
>>>  
>>>   services:
>>>     mon: 1 daemons, quorum vm-00 (age 2m)
>>>     mgr: vm-00.zzeinv(active, since 94s)
>>>     osd: 0 osds: 0 up, 0 in
>>>  
>>>   data:
>>>     pools:   0 pools, 0 pgs
>>>     objects: 0 objects, 0 B
>>>     usage:   0 B used, 0 B / 0 B avail
>>>     pgs:     
>>>  
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph -s
>>>   cluster:
>>>     id:     a89470d2-9d4a-11eb-b3ae-52540015770c
>>>     health: HEALTH_OK
>>>  
>>>   services:
>>>     mon: 3 daemons, quorum vm-00,vm-01,vm-02 (age 20s)
>>>     mgr: vm-00.zzeinv(active, since 3m), standbys: vm-01.nojbtt
>>>     osd: 3 osds: 3 up (since 3s), 3 in (since 17s)
>>>  
>>>   data:
>>>     pools:   1 pools, 1 pgs
>>>     objects: 0 objects, 0 B
>>>     usage:   15 MiB used, 450 GiB / 450 GiB avail
>>>     pgs:     100.000% pgs not active
>>>              1 creating+peering
>>>  
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# radosgw-admin realm create --rgw-realm=test_realm --default
>>> {
>>>     "id": "b30b2a73-3f7e-4f2f-86cc-6fa87c40cf54",
>>>     "name": "test_realm",
>>>     "current_period": "0bf17c0a-2127-4681-97cd-11d91b77cfab",
>>>     "epoch": 1
>>> }
>>> [ceph: root@vm-00 /]# radosgw-admin zonegroup create --rgw-zonegroup=test_group --rgw-realm=test_realm --master --default
>>> {
>>>     "id": "479da4ed-412e-435c-9347-b14c2dc509ab",
>>>     "name": "test_group",
>>>     "api_name": "test_group",
>>>     "is_master": "true",
>>>     "endpoints": [],
>>>     "hostnames": [],
>>>     "hostnames_s3website": [],
>>>     "master_zone": "",
>>>     "zones": [],
>>>     "placement_targets": [],
>>>     "default_placement": "",
>>>     "realm_id": "b30b2a73-3f7e-4f2f-86cc-6fa87c40cf54",
>>>     "sync_policy": {
>>>         "groups": []
>>>     }
>>> }
>>> [ceph: root@vm-00 /]# radosgw-admin zone create --rgw-zonegroup=test_group --rgw-zone=test_zone --rgw-realm=test_realm --master --default
>>> {
>>>     "id": "5472f11e-23f9-4107-a10c-4f9bcddaaf8e",
>>>     "name": "test_zone",
>>>     "domain_root": "test_zone.rgw.meta:root",
>>>     "control_pool": "test_zone.rgw.control",
>>>     "gc_pool": "test_zone.rgw.log:gc",
>>>     "lc_pool": "test_zone.rgw.log:lc",
>>>     "log_pool": "test_zone.rgw.log",
>>>     "intent_log_pool": "test_zone.rgw.log:intent",
>>>     "usage_log_pool": "test_zone.rgw.log:usage",
>>>     "roles_pool": "test_zone.rgw.meta:roles",
>>>     "reshard_pool": "test_zone.rgw.log:reshard",
>>>     "user_keys_pool": "test_zone.rgw.meta:users.keys",
>>>     "user_email_pool": "test_zone.rgw.meta:users.email",
>>>     "user_swift_pool": "test_zone.rgw.meta:users.swift",
>>>     "user_uid_pool": "test_zone.rgw.meta:users.uid",
>>>     "otp_pool": "test_zone.rgw.otp",
>>>     "system_key": {
>>>         "access_key": "",
>>>         "secret_key": ""
>>>     },
>>>     "placement_pools": [
>>>         {
>>>             "key": "default-placement",
>>>             "val": {
>>>                 "index_pool": "test_zone.rgw.buckets.index",
>>>                 "storage_classes": {
>>>                     "STANDARD": {
>>>                         "data_pool": "test_zone.rgw.buckets.data"
>>>                     }
>>>                 },
>>>                 "data_extra_pool": "test_zone.rgw.buckets.non-ec",
>>>                 "index_type": 0
>>>             }
>>>         }
>>>     ],
>>>     "realm_id": "b30b2a73-3f7e-4f2f-86cc-6fa87c40cf54",
>>>     "notif_pool": "test_zone.rgw.log:notif"
>>> }
>>> [ceph: root@vm-00 /]# radosgw-admin user create --uid=test_user --display-name=TEST_USER --system
>>> {
>>>     "user_id": "test_user",
>>>     "display_name": "TEST_USER",
>>>     "email": "",
>>>     "suspended": 0,
>>>     "max_buckets": 1000,
>>>     "subusers": [],
>>>     "keys": [
>>>         {
>>>             "user": "test_user",
>>>             "access_key": "8MU318RFEG7640QZYL6K",
>>>             "secret_key": "OqD8T1MgYAkWwavMyohddexCVJZIX0BHQMrp0ogt"
>>>         }
>>>     ],
>>>     "swift_keys": [],
>>>     "caps": [],
>>>     "op_mask": "read, write, delete",
>>>     "system": "true",
>>>     "default_placement": "",
>>>     "default_storage_class": "",
>>>     "placement_tags": [],
>>>     "bucket_quota": {
>>>         "enabled": false,
>>>         "check_on_raw": false,
>>>         "max_size": -1,
>>>         "max_size_kb": 0,
>>>         "max_objects": -1
>>>     },
>>>     "user_quota": {
>>>         "enabled": false,
>>>         "check_on_raw": false,
>>>         "max_size": -1,
>>>         "max_size_kb": 0,
>>>         "max_objects": -1
>>>     },
>>>     "temp_url_keys": [],
>>>     "type": "rgw",
>>>     "mfa_ids": []
>>> }
>>> 
>>> [ceph: root@vm-00 /]# radosgw-admin period update --rgw-realm=test_realm --commit
>>> {
>>>     "id": "20384e7f-6abe-464d-88d7-c66ad94a74d4",
>>>     "epoch": 1,
>>>     "predecessor_uuid": "0bf17c0a-2127-4681-97cd-11d91b77cfab",
>>>     "sync_status": [],
>>>     "period_map": {
>>>         "id": "20384e7f-6abe-464d-88d7-c66ad94a74d4",
>>>         "zonegroups": [
>>>             {
>>>                 "id": "479da4ed-412e-435c-9347-b14c2dc509ab",
>>>                 "name": "test_group",
>>>                 "api_name": "test_group",
>>>                 "is_master": "true",
>>>                 "endpoints": [],
>>>                 "hostnames": [],
>>>                 "hostnames_s3website": [],
>>>                 "master_zone": "5472f11e-23f9-4107-a10c-4f9bcddaaf8e",
>>>                 "zones": [
>>>                     {
>>>                         "id": "5472f11e-23f9-4107-a10c-4f9bcddaaf8e",
>>>                         "name": "test_zone",
>>>                         "endpoints": [],
>>>                         "log_meta": "false",
>>>                         "log_data": "false",
>>>                         "bucket_index_max_shards": 11,
>>>                         "read_only": "false",
>>>                         "tier_type": "",
>>>                         "sync_from_all": "true",
>>>                         "sync_from": [],
>>>                         "redirect_zone": ""
>>>                     }
>>>                 ],
>>>                 "placement_targets": [
>>>                     {
>>>                         "name": "default-placement",
>>>                         "tags": [],
>>>                         "storage_classes": [
>>>                             "STANDARD"
>>>                         ]
>>>                     }
>>>                 ],
>>>                 "default_placement": "default-placement",
>>>                 "realm_id": "b30b2a73-3f7e-4f2f-86cc-6fa87c40cf54",
>>>                 "sync_policy": {
>>>                     "groups": []
>>>                 }
>>>             }
>>>         ],
>>>         "short_zone_ids": [
>>>             {
>>>                 "key": "5472f11e-23f9-4107-a10c-4f9bcddaaf8e",
>>>                 "val": 3216798615
>>>             }
>>>         ]
>>>     },
>>>     "master_zonegroup": "479da4ed-412e-435c-9347-b14c2dc509ab",
>>>     "master_zone": "5472f11e-23f9-4107-a10c-4f9bcddaaf8e",
>>>     "period_config": {
>>>         "bucket_quota": {
>>>             "enabled": false,
>>>             "check_on_raw": false,
>>>             "max_size": -1,
>>>             "max_size_kb": 0,
>>>             "max_objects": -1
>>>         },
>>>         "user_quota": {
>>>             "enabled": false,
>>>             "check_on_raw": false,
>>>             "max_size": -1,
>>>             "max_size_kb": 0,
>>>             "max_objects": -1
>>>         }
>>>     },
>>>     "realm_id": "b30b2a73-3f7e-4f2f-86cc-6fa87c40cf54",
>>>     "realm_name": "test_realm",
>>>     "realm_epoch": 2
>>> }
>>> [ceph: root@vm-00 /]# ceph orch apply rgw example_service_id test_realm test_zone
>>> Scheduled rgw.example_service_id update...
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph orch ps
>>> NAME                                 HOST   STATUS         REFRESHED  AGE  PORTS          VERSION  IMAGE ID      CONTAINER ID  
>>> alertmanager.vm-00                   vm-00  running (13m)  119s ago   17m  *:9093 *:9094  0.20.0   0881eb8f169f  41e66dfb8bc6  
>>> crash.vm-00                          vm-00  running (17m)  119s ago   17m  -              16.2.0   24ecd6d5f14c  4cdf022f0007  
>>> crash.vm-01                          vm-01  running (15m)  119s ago   15m  -              16.2.0   24ecd6d5f14c  c7d9c0bc1253  
>>> crash.vm-02                          vm-02  running (14m)  2m ago     14m  -              16.2.0   24ecd6d5f14c  c855d684b6e6  
>>> grafana.vm-00                        vm-00  running (16m)  119s ago   16m  *:3000         6.7.4    80728b29ad3f  653032f2f691  
>>> mgr.vm-00.zzeinv                     vm-00  running (18m)  119s ago   18m  *:9283         16.2.0   24ecd6d5f14c  17b1e5d38938  
>>> mgr.vm-01.nojbtt                     vm-01  running (15m)  119s ago   15m  *:8443 *:9283  16.2.0   24ecd6d5f14c  8752083ae352  
>>> mon.vm-00                            vm-00  running (18m)  119s ago   18m  -              16.2.0   24ecd6d5f14c  3e40f8b41d2d  
>>> mon.vm-01                            vm-01  running (15m)  119s ago   15m  -              16.2.0   24ecd6d5f14c  436ad7b78c6d  
>>> mon.vm-02                            vm-02  running (14m)  2m ago     14m  -              16.2.0   24ecd6d5f14c  c3d16eaf0e01  
>>> node-exporter.vm-00                  vm-00  running (16m)  119s ago   16m  *:9100         0.18.1   e5a616e4b9cf  89d9efec163f  
>>> node-exporter.vm-01                  vm-01  running (15m)  119s ago   15m  *:9100         0.18.1   e5a616e4b9cf  021d5ee8d761  
>>> node-exporter.vm-02                  vm-02  running (14m)  2m ago     14m  *:9100         0.18.1   e5a616e4b9cf  48ffbf247b7b  
>>> osd.0                                vm-01  running (14m)  119s ago   14m  -              16.2.0   24ecd6d5f14c  67b85d510df6  
>>> osd.1                                vm-00  running (14m)  119s ago   14m  -              16.2.0   24ecd6d5f14c  42fde61a5621  
>>> osd.2                                vm-02  running (13m)  2m ago     13m  -              16.2.0   24ecd6d5f14c  420dc28e40a5  
>>> prometheus.vm-00                     vm-00  running (13m)  119s ago   16m  *:9095         2.18.1   de242295e225  883be15ac32b  
>>> rgw.example_service_id.vm-00.unithu  vm-00  running (12m)  119s ago   12m  *:80           16.2.0   24ecd6d5f14c  28b66da4a3f2  
>>> rgw.example_service_id.vm-01.sghwhe  vm-01  running (12m)  119s ago   12m  *:80           16.2.0   24ecd6d5f14c  80f5875a9ae9  
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# echo -n "{'example_service_id.vm-00.unithu': 'OqD8T1MgYAkWwavMyohddexCVJZIX0BHQMrp0ogt', 'example_service_id.vm-01.sghwhe': 'OqD8T1MgYAkWwavMyohddexCVJZIX0BHQMrp0ogt'}" > sk
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# echo -n "{'example_service_id.vm-00.unithu': '8MU318RFEG7640QZYL6K', 'example_service_id.vm-01.sghwhe': '8MU318RFEG7640QZYL6K'}" > ak
>>> [ceph: root@vm-00 /]# ceph dashboard set-rgw-api-secret-key -i sk
>>> Option RGW_API_SECRET_KEY updated
>>> [ceph: root@vm-00 /]# ceph dashboard set-rgw-api-access-key -i ak
>>> Option RGW_API_ACCESS_KEY updated
>>> [ceph: root@vm-00 /]# ceph osd pool create nfs-ganesha
>>> pool 'nfs-ganesha' created
>>> [ceph: root@vm-00 /]# ceph osd pool application enable nfs-ganesha rgw
>>> enabled application 'rgw' on pool 'nfs-ganesha'
>>> [ceph: root@vm-00 /]# ceph orch apply nfs foo --pool nfs-ganesha --namespace foo
>>> Scheduled nfs.foo update...
>>> [ceph: root@vm-00 /]# ceph dashboard set-ganesha-clusters-rados-pool-namespace nfs-ganesha/foo
>>> Option GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE updated
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# # CREATE EXPORT IN DASHBOARD
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# ceph -s
>>>   cluster:
>>>     id:     a89470d2-9d4a-11eb-b3ae-52540015770c
>>>     health: HEALTH_OK
>>>  
>>>   services:
>>>     mon:     3 daemons, quorum vm-00,vm-01,vm-02 (age 16m)
>>>     mgr:     vm-00.zzeinv(active, since 19m), standbys: vm-01.nojbtt
>>>     osd:     3 osds: 3 up (since 15m), 3 in (since 16m)
>>>     rgw:     2 daemons active (2 hosts, 1 zones)
>>>     rgw-nfs: 1 daemon active (1 hosts, 1 zones)
>>>  
>>>   data:
>>>     pools:   7 pools, 169 pgs
>>>     objects: 349 objects, 28 KiB
>>>     usage:   67 MiB used, 450 GiB / 450 GiB avail
>>>     pgs:     169 active+clean
>>>  
>>>   io:
>>>     client:   90 KiB/s rd, 834 B/s wr, 173 op/s rd, 5 op/s wr
>>>  
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# 
>>> [ceph: root@vm-00 /]# rados --pool nfs-ganesha --namespace foo ls
>>> grace
>>> rec-0000000000000002:nfs.foo.vm-01
>>> conf-nfs.foo
>>> export-1
>>> [ceph: root@vm-00 /]# rados --pool nfs-ganesha --namespace foo get conf-nfs.foo -
>>> %url "rados://nfs-ganesha/foo/export-1"
>>> 
>>> [ceph: root@vm-00 /]# 


Everything is working as expected when I tested on Pacific.
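For completeness, a client-side sketch for exercising IO through an export created this way (hypothetical placeholders: <nfs-host> is the node running the nfs.foo daemon and /<pseudo> is the Pseudo path entered in the dashboard export form):

mount -t nfs -o nfsvers=4.1,proto=tcp <nfs-host>:/<pseudo> /mnt   # NFSv4.x client mount
ls /mnt                                                           # basic sanity check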

Comment 9 Daniel Pivonka 2021-04-14 18:50:26 UTC
Same behavior downstream. Everything is working as expected (besides the stray daemon issue, for which I'm working on getting the fix merged upstream).

>>> [ceph: root@vm-00 /]# ceph -s
>>>   cluster:
>>>     id:     3a64cac0-9d50-11eb-8969-5254006c3edc
>>>     health: HEALTH_WARN
>>>             1 stray daemon(s) not managed by cephadm
>>>  
>>>   services:
>>>     mon:     3 daemons, quorum vm-00,vm-01,vm-02 (age 4m)
>>>     mgr:     vm-00.licxyg(active, since 6m), standbys: vm-01.dapill
>>>     osd:     3 osds: 3 up (since 3m), 3 in (since 3m)
>>>     rgw:     2 daemons active (2 hosts, 1 zones)
>>>     rgw-nfs: 1 daemon active (1 hosts, 1 zones)
>>>  
>>>   data:
>>>     pools:   7 pools, 169 pgs
>>>     objects: 223 objects, 8.7 KiB
>>>     usage:   48 MiB used, 450 GiB / 450 GiB avail
>>>     pgs:     169 active+clean
>>>  
>>>   io:
>>>     client:   1.1 KiB/s rd, 1 op/s rd, 0 op/s wr
>>>  
>>>   progress:
>>>     Global Recovery Event (22s)
>>>       [===========================.] 
>>>  
>>> [ceph: root@vm-00 /]# rados --pool nfs-ganesha --namespace foo ls
>>> grace
>>> rec-0000000000000002:nfs.foo.vm-02
>>> conf-nfs.foo
>>> export-1
>>> [ceph: root@vm-00 /]# rados --pool nfs-ganesha --namespace foo get conf-nfs.foo -
>>> %url "rados://nfs-ganesha/foo/export-1"
>>> 
>>> [ceph: root@vm-00 /]# ceph --version
>>> ceph version 16.2.0-4.el8cp (987b1d2838ad9c505a6f557f32ee75c1e3ed7028) pacific (stable)
>>> [ceph: root@vm-00 /]# ceph config get mon container_image
>>> registry.redhat.io/rhceph-beta/rhceph-5-rhel8@sha256:24c617082680ef85c43c6e2c4fe462c69805d2f38df83e51f968cec6b1c097a2
>>> [ceph: root@vm-00 /]#

Comment 13 Daniel Pivonka 2021-04-16 17:11:17 UTC
Additionally, the fix for the stray daemon issue mentioned above has been merged into upstream master: https://github.com/ceph/ceph/pull/40711

Comment 14 Daniel Pivonka 2021-04-21 20:48:41 UTC
@vimishra It looks like everything is working as expected: you are able to create an nfs-export in the dashboard and perform IO. If so, can you move this to VERIFIED? The stray daemon issue should be its own BZ.

Comment 20 errata-xmlrpc 2021-08-30 08:28:20 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

