Bug 2017821
Summary: | [RFE]: Cephadm NFS cluster creates RGW exports at bucket level only | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Tejas <tchandra> |
Component: | Cephadm | Assignee: | Adam King <adking> |
Status: | CLOSED ERRATA | QA Contact: | Tejas <tchandra> |
Severity: | high | Docs Contact: | Ranjini M N <rmandyam> |
Priority: | medium | ||
Version: | 5.1 | CC: | adking, agunn, bniver, cbodley, gsitlani, rlepaksh, rmandyam, tserlin |
Target Milestone: | --- | Keywords: | FutureFeature, Rebase, TestBlocker |
Target Release: | 5.1 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | ceph-16.2.7-14.el8cp | Doc Type: | Enhancement |
Doc Text: |
.The `ceph nfs export create rgw` command now supports exporting Ceph Object Gateway users
Previously, the `ceph nfs export create rgw` command could create Ceph Object Gateway exports only at the bucket level.
With this release, the command can create Ceph Object Gateway exports at both the user and the bucket level.
.Syntax
[source,subs="verbatim,quotes"]
----
ceph nfs export create rgw --cluster-id _CLUSTER_ID_ --pseudo-path _PSEUDO_PATH_ --user-id _USER_ID_ [--readonly] [--client_addr _VALUE_...] [--squash _VALUE_]
----
.Example
----
[ceph: root@host01 /]# ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --user-id myuser --client_addr 192.168.10.0/24
----
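For contrast, a bucket-level export continues to use the same command with the `--bucket` option instead of `--user-id`. The following is a minimal sketch; the bucket name `mybucket` and its pseudo-path are illustrative, not taken from this bug:
----
[ceph: root@host01 /]# ceph nfs export create rgw --cluster-id mynfs --pseudo-path /mybucket --bucket mybucket
[ceph: root@host01 /]# ceph nfs export ls mynfs
----
The second command lists the exports in the cluster so you can confirm that both the user-level and bucket-level exports were created.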
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2022-04-04 10:22:22 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 2018248 | ||
Bug Blocks: | 2031073 |
Comment 2
Sebastian Wagner
2021-10-27 13:39:29 UTC
Casey, do you know if there is a downstream BZ for https://tracker.ceph.com/issues/53030? It is blocking this BZ.

I don't know. If you need one, you might want to go ahead and create it.

I see that Matt opened https://bugzilla.redhat.com/show_bug.cgi?id=2018248

This is in flight upstream right now. Leaving it in 5.1, as this is basically a regression relative to 4.

https://github.com/ceph/ceph/pull/43811 is now in pacific. Waiting for a rebase.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174