Bug 2209692 - RGW LDAP authentication as documented does not work due to containerization [NEEDINFO]
Summary: RGW LDAP authentication as documented does not work due to containerization
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 6.0
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 6.1z2
Assignee: Adam King
QA Contact: Mohit Bisht
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-05-24 14:10 UTC by Michaela Lang
Modified: 2023-08-03 12:17 UTC (History)
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:
milang: needinfo? (adking)
milang: needinfo? (mbenjamin)




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-6733 0 None None None 2023-05-24 14:12:09 UTC

Description Michaela Lang 2023-05-24 14:10:36 UTC
Description of problem:
With Ceph 6, containerization of the components is mandatory, and writing ceph.conf directly is deprecated.


Following our documentation on how to integrate LDAP authentication with RGW, we hit two issues:

- the documentation instructs writing ceph.conf directly (https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6/html-single/object_gateway_guide/index#configure-the-gateway-to-use-ldap-rgw)
- we do not provide a method to update the containerized ceph.conf deployment through the orchestrator

The problem we face is that in containerized environments ceph.conf is provided by the orchestrator, and there is no supported way to add the `rgw_ldap*` entries that are mandatory for RGW to authenticate against LDAP.
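
For reference, the documented approach boils down to adding a section like the following to ceph.conf (a sketch; the section name, URI, and DNs are illustrative placeholders):

```
[client.rgw.<instance>]
rgw_ldap_uri = ldaps://ldap.example.com:636
rgw_ldap_binddn = "cn=Directory Manager"
rgw_ldap_secret = /etc/ceph/bindpass
rgw_ldap_searchdn = "ou=people,dc=example,dc=com"
rgw_ldap_dnattr = uid
rgw_s3_auth_use_ldap = true
```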


Version-Release number of selected component (if applicable):
6.x


How reproducible:
always


Steps to Reproduce:
1. follow the procedure described at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6/html-single/object_gateway_guide/index#configure-ldap-and-ceph-object-gateway
2. after restarting the RGW instances, LDAP authentication does not work due to the missing `rgw_ldap*` entries in ceph.conf
3. manually updating /var/lib/ceph/<fsid>/<rgw-instance-name>.<hostname>/config (mapped as ceph.conf) with the appropriate values and restarting the instance restores LDAP authentication (sketched below)
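
The manual workaround in step 3 looks roughly like this (a sketch; the daemon name and fsid are placeholders, and any redeploy by the orchestrator undoes the edit):

```
# append the rgw_ldap* settings to the daemon's mapped config file
vi /var/lib/ceph/<fsid>/<rgw-instance-name>.<hostname>/config

# restart the containerized daemon so it rereads the file
ceph orch daemon restart <rgw-instance-name>.<hostname>
```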

Actual results:
- 403 Authentication required returned to any S3 client
- radosgw-admin user list does not return any LDAP user


Expected results:
- 200 for any authenticated S3 client
- radosgw-admin user list additionally returns LDAP users


Additional info:
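Per the linked guide's flow, the S3 client authenticates with an LDAP token generated by radosgw-token, so a quick functional check looks like this (a sketch; the user name and password are placeholders):

```
# generate the LDAP token the S3 client presents as its access key
export RGW_ACCESS_KEY_ID="ldapuser"
export RGW_SECRET_ACCESS_KEY="ldappassword"
radosgw-token --encode --ttype=ldap
```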

Comment 2 Michaela Lang 2023-05-24 14:58:43 UTC
Hey Matt,

Unfortunately, `ceph config set` does not write those values into the container config file:

```
ceph config set global rgw_ldap_uri 'ldaps://ldap.example.com:636'
ceph config set global rgw_ldap_binddn 'cn=Directory Manager'
ceph config set global rgw_ldap_secret '/etc/ceph/bindpass'
ceph config set global rgw_ldap_searchdn 'ou=people,dc=example,dc=com'
ceph config set global rgw_ldap_dnattr 'uid'
ceph config set global rgw_s3_auth_use_ldap true

ceph orch redeploy rgw.default

# check on timestamp to see update
ls -l /var/lib/ceph/<fsid>/rgw.default.<hostname>/config
cat /var/lib/ceph/<fsid>/rgw.default.<hostname>/config

# minimal ceph.conf for 374c8ec2-ea6f-11ed-8bf4-525400db0518
[global]
	fsid = <fsid>
	mon_host = [v2:node1:3300/0,v1:node1:6789/0] [v2:node2:3300/0,v1:node2:6789/0] [v2:node3:3300/0,v1:node3:6789/0]
```

Of course, the redeploy wipes whatever was manually added to /var/lib/ceph/<fsid>/rgw.default.<hostname>/config.
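
Note that options set this way should at least be visible in the monitor config database, even though they never appear in the minimal container config file; a quick check (a sketch):

```
# confirm the options landed in the mon config database
ceph config dump | grep rgw_ldap
```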

Comment 6 Scott Nipp 2023-07-27 16:27:21 UTC
I have two questions. First, any thoughts on a backport for this issue into RHCS 5?

The other question is something I was playing around with for a customer case. The customer had used `ceph config set` to set most of the needed LDAP parameters, and these do persist across restarts of the RGW services. His blocker was that /etc/ceph/bindpass was not being deployed in the container. In a lab environment I was able to get /etc/ceph/bindpass into the container via the following extra args in the rgw.yaml and then applying it. Not sure if this is helpful, but I thought I'd ask about it at least.

Additions to rgw.yaml below:

extra_container_args:
  - "-v"
  - "/etc/ceph/bindpass:/etc/ceph/bindpass:ro"

Then, of course, apply and restart:
# ceph orch apply -i rgw.yaml 
# ceph orch restart rgw.rgwsvcid
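
For context, in a complete service spec the `extra_container_args` snippet sits at the top level, roughly like this (a sketch; the service_id matches the restart command above, and the placement is illustrative):

```
service_type: rgw
service_id: rgwsvcid
placement:
  hosts:
    - node1
extra_container_args:
  - "-v"
  - "/etc/ceph/bindpass:/etc/ceph/bindpass:ro"
```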

