Bug 1842808 - [RFE] : Configuration support of nfs on rgw using cephadm
Summary: [RFE] : Configuration support of nfs on rgw using cephadm
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.1
Assignee: Adam King
QA Contact: Tejas
Docs Contact: Ranjini M N
URL:
Whiteboard:
Duplicates: 1967254
Depends On:
Blocks: 1936095 1959686 1969991 2031073
 
Reported: 2020-06-02 07:20 UTC by Vasishta
Modified: 2022-04-04 10:20 UTC (History)
CC: 7 users

Fixed In Version: ceph-16.2.6-1.el8cp
Doc Type: Enhancement
Doc Text:
.Configuration of NFS-RGW using Cephadm is now supported
In {storage-product} 5.0, configuring NFS-RGW required using the dashboard as a workaround, and users with such a configuration were advised to delay upgrading until {storage-product} 5.1. With this release, NFS-RGW configuration is supported, and users with this configuration can upgrade their storage cluster; it works as expected.
Clone Of:
Environment:
Last Closed: 2022-04-04 10:19:51 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 43686 0 None None None 2020-06-02 07:20:13 UTC
Red Hat Bugzilla 1962125 1 unspecified CLOSED [RGW : NFS] nfs-ganesha crash observed on doing 'rmdir' on nfs mount. 2021-08-30 08:35:25 UTC
Red Hat Product Errata RHSA-2022:1174 0 None None None 2022-04-04 10:20:26 UTC

Internal Links: 1967254

Description Vasishta 2020-06-02 07:20:14 UTC
Description of problem:

Support is needed in cephadm to configure NFS on top of RGW.

Comment 1 Daniel Pivonka 2020-11-03 17:15:01 UTC
NFS RGW:


This PR is required for this to work but has not been merged yet: https://github.com/ceph/ceph/pull/37600


Create a 3-node cluster with OSDs, then run:

radosgw-admin realm create --rgw-realm=test_realm --default
radosgw-admin zonegroup create --rgw-zonegroup=default --rgw-realm=test_realm --master --default
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=test_zone --rgw-realm=test_realm --master --default
radosgw-admin period update --rgw-realm=test_realm --commit
ceph orch apply rgw test_realm test_zone 
radosgw-admin user create --uid=test_user --display-name=TEST_USER --system 
ceph dashboard set-rgw-api-access-key <access_key>
ceph dashboard set-rgw-api-secret-key <secret_key>
ceph osd pool create nfs-ganesha
ceph orch apply nfs foo --pool nfs-ganesha --namespace foo
ceph dashboard set-ganesha-clusters-rados-pool-namespace nfs-ganesha/foo
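
Not in the original steps, but before creating the export it can help to confirm that cephadm actually scheduled the RGW and NFS services (standard orchestrator commands; the output format may vary by release):

ceph orch ls rgw                   # the rgw.test_realm.test_zone service should be listed
ceph orch ls nfs                   # the nfs.foo service should be listed
ceph orch ps --daemon_type nfs     # shows where the ganesha daemon was placed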


Go to the dashboard and create an NFS export (example settings below):
https://pasteboard.co/JwR6MHD.png
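
The pasteboard screenshot may no longer resolve. As a rough CLI equivalent (not what was used here; the argument spelling of `ceph nfs export create rgw` changed across Pacific releases, so treat this as a sketch and confirm against the local --help output), an export for the bucket used below could look like:

ceph nfs export create rgw --cluster-id foo --pseudo-path /rgwtest --bucket rgwtest    # /rgwtest pseudo path is an assumption
ceph nfs export ls foo    # confirm the export was created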





Test file transfer:


Put a file in an RGW bucket:

dnf install s3cmd
s3cmd --configure   (answers: access-key, secret-key, us, rgwhost:80, rgwhost:80, <blank>, <blank>, no, <blank>, yes, yes)
vi /home/dpivonka/.s3cfg  -> set signature_v2 = True
s3cmd put TEST_FILE s3://rgwtest    <---- matches the path configured in the NFS export setup
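
For reference, the terse --configure answers above correspond roughly to these ~/.s3cfg entries (a sketch; "rgwhost" stands in for the actual RGW endpoint):

# ~/.s3cfg (relevant entries only)
access_key = <access_key>
secret_key = <secret_key>
bucket_location = us
host_base = rgwhost:80
host_bucket = rgwhost:80
use_https = False
signature_v2 = True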



See if it shows up on the NFS mount:

dnf install nfs-utils
systemctl start nfs-server
sudo mount -t nfs -o port=2049 {nfs-ip}:<pseudo> /mnt      <----- pseudo path from the NFS export setup
ls /mnt   <------ TEST_FILE should be there
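
Not part of the original steps, but a simple way to confirm the object came through the NFS path intact (assuming the mount succeeded and the export exposes the bucket root as above):

md5sum TEST_FILE /mnt/TEST_FILE    # the two checksums should match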

Comment 5 Sebastian Wagner 2021-06-03 12:37:54 UTC
*** Bug 1967254 has been marked as a duplicate of this bug. ***

Comment 6 Yaniv Kaul 2021-06-23 12:33:42 UTC
Veera, I assume given the 'workaround' (which I don't like at all), we can remove the blocker? flag here and defer to 5.1?

Comment 7 Veera Raghava Reddy 2021-06-25 18:29:34 UTC
BZ 1969991 has been verified to check for NFS-Ganesha/RGW during a 4.x to 5.0 upgrade and alert that the upgrade is not supported. So moving this BZ to 5.1.

Comment 8 Veera Raghava Reddy 2021-06-25 18:31:52 UTC
NFS-Ganesha Upgrade check BZ 1970003 [BZ 1969991 - Doc BZ]

Comment 9 Sebastian Wagner 2021-07-06 10:12:51 UTC
The PR was merged upstream: https://github.com/ceph/ceph/pull/41574. It needs a backport.

Comment 20 errata-xmlrpc 2022-04-04 10:19:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174

