Description of problem: support needed from cephadm to configure NFS + RGW
NFS RGW: This PR is required for this to work but has not been merged yet: https://github.com/ceph/ceph/pull/37600

Create a 3-node cluster with OSDs, then configure RGW:

  radosgw-admin realm create --rgw-realm=test_realm --default
  radosgw-admin zonegroup create --rgw-zonegroup=default --rgw-realm=test_realm --master --default
  radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --rgw-realm=test_realm --master --default
  radosgw-admin period update --rgw-realm=test_realm --commit
  ceph orch apply rgw test_realm test_zone
  radosgw-admin user create --uid=test_user --display-name=TEST_USER --system
  ceph dashboard set-rgw-api-access-key <access_key>
  ceph dashboard set-rgw-api-secret-key <secret_key>

Configure NFS:

  ceph osd pool create nfs-ganesha
  ceph orch apply nfs foo --pool nfs-ganesha --namespace foo
  ceph dashboard set-ganesha-clusters-rados-pool-namespace nfs-ganesha/foo

Go to the dashboard and create an NFS export (example settings: https://pasteboard.co/JwR6MHD.png; a rough sketch of the resulting export is included at the end of this comment).

Test file transfer, i.e. put a file in the RGW bucket:

  dnf install s3cmd
  s3cmd --configure    (access-key, secret-key, us, rgwhost:80, rgwhost:80, <blank>, <blank>, no, <blank>, yes, yes)
  vi /home/dpivonka/.s3cfg    -> set: signature_v2 = True
  s3cmd put TEST_FILE s3://rgwtest    <---- bucket name is the path from the NFS export setup

See if it shows up on the NFS mount (verification commands are also sketched below):

  dnf install nfs-utils
  systemctl start nfs-server
  sudo mount -t nfs -o port=2049 {nfs-ip}:<pseudo> /mnt    <---- pseudo from the NFS export setup
  ls /mnt    <---- TEST_FILE should be there
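For reference, a dashboard-created RGW export stored in the nfs-ganesha/foo pool/namespace looks roughly like the following Ganesha config block. This is a sketch, not the exact object the dashboard writes (field set can vary by release); rgwtest, /rgwtest, test_user, and the keys are placeholders matching the example above:

  EXPORT {
      Export_ID = 1;
      Path = "rgwtest";            # RGW bucket being exported
      Pseudo = "/rgwtest";         # pseudo path used in the mount command
      Access_Type = RW;
      Squash = no_root_squash;
      Protocols = 4;
      Transports = TCP;
      FSAL {
          Name = RGW;              # RGW FSAL backs the export with the bucket
          User_Id = "test_user";
          Access_Key_Id = "<access_key>";
          Secret_Access_Key = "<secret_key>";
      }
  }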
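To sanity-check each step before inspecting the mount, something like the following can be used (a sketch; daemon names reported by the orchestrator will differ per host):

  # confirm the nfs daemon deployed by 'ceph orch apply nfs foo' is running
  ceph orch ps --daemon-type nfs
  # list the export objects written into the nfs-ganesha pool, namespace foo
  rados -p nfs-ganesha -N foo ls
  # confirm the file landed in the RGW bucket before checking the NFS mount
  s3cmd ls s3://rgwtest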
*** Bug 1967254 has been marked as a duplicate of this bug. ***
Veera, I assume given the 'workaround' (which I don't like at all), we can remove the blocker? flag here and defer to 5.1?
BZ 1969991 has been verified to check for NFS-Ganesha/RGW during a 4.x to 5.0 upgrade and alert that the upgrade is not supported. So moving this BZ to 5.1.
NFS-Ganesha upgrade check: BZ 1970003 [BZ 1969991 - Doc BZ]
The PR was merged upstream: https://github.com/ceph/ceph/pull/41574. A backport is needed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:1174