Bug 1326200 - [RFE] cephx user management for RBD images
Summary: [RFE] cephx user management for RBD images
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RBD
Version: 1.3.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 3.*
Assignee: Jason Dillaman
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks: 1258382
 
Reported: 2016-04-12 07:01 UTC by Vikhyat Umrao
Modified: 2020-12-11 12:09 UTC (History)
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-30 14:50:16 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 15468 0 None None None 2016-04-12 07:11:16 UTC
Red Hat Knowledge Base (Solution) 2253201 0 None None None 2016-04-12 07:22:02 UTC

Description Vikhyat Umrao 2016-04-12 07:01:03 UTC
Description of problem:

[RFE] cephx user management for RBD images

Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 1.3 

With the current cephx user implementation, there is no way to:

1. Restrict a user to a specific client host.
2. Restrict a user to a specific RBD image.

This RFE will track the progress of the above feature request.

Comment 1 Vikhyat Umrao 2016-04-12 07:03:29 UTC
- https://access.redhat.com/documentation/en/red-hat-ceph-storage/version-1.3/administration-guide/#user_management

- With the current implementation, user management happens at the *pool* level; the customer needs user management to happen at the client host and RBD image level.
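To illustrate the pool-level granularity described above, a typical cephx grant in the RHCS 1.3 era looked like the following sketch. The client name and pool are illustrative, not taken from this bug:

```shell
# Caps are granted per pool, not per host or per image.
# A hypothetical client "client.rbd-user" is granted read/write/execute
# on every object in the pool "rbd":
ceph auth get-or-create client.rbd-user \
    mon 'allow r' \
    osd 'allow rwx pool=rbd'
# Any host that holds this keyring can reach every RBD image in the pool;
# there is no cap syntax to narrow the grant to one host or one image.
```

This is exactly the limitation the RFE asks to address: the `osd` cap language at the time could restrict by pool (and object-name prefix), but not by client host or individual image.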

Comment 2 Vikhyat Umrao 2016-04-12 07:11:17 UTC
Ceph Upstream Tracker : http://tracker.ceph.com/issues/15468

Comment 5 Jason Dillaman 2017-07-11 15:03:52 UTC
A possible future solution would be to add support for RBD namespaces and per-tenant access caps that restrict access to pool/namespace combinations.

Comment 8 Drew Harris 2019-01-30 14:50:16 UTC
I have closed this issue because it has been inactive for some time now. If you feel it still deserves attention, feel free to reopen it.

