Bug 2209692

Summary: RGW LDAP authentication as documented does not work due to containerization
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Michaela Lang <milang>
Component: Cephadm
Assignee: Adam King <adking>
Status: NEW
QA Contact: Mohit Bisht <mobisht>
Severity: high
Priority: unspecified
Version: 6.0
CC: adking, ceph-eng-bugs, cephqe-warriors, mbenjamin, snipp
Flags: milang: needinfo? (adking), milang: needinfo? (mbenjamin)
Target Release: 6.1z2
Hardware: All
OS: All
Doc Type: If docs needed, set a value
Type: Bug

Description Michaela Lang 2023-05-24 14:10:36 UTC
Description of problem:
With Ceph 6, containerizing the components is mandatory, and writing ceph.conf directly is deprecated.


Following our documentation on how to integrate LDAP authentication with RGW, we hit two issues:

- the documentation has us write ceph.conf directly (https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6/html-single/object_gateway_guide/index#configure-the-gateway-to-use-ldap-rgw)
- we provide no method to update the containerized ceph.conf deployment through the orchestrator

The problem we face is that in containerized environments ceph.conf is generated by the orchestrator, and we provide no way to add the `rgw_ldap*` entries that are mandatory for RGW to authenticate against LDAP.


Version-Release number of selected component (if applicable):
6.x


How reproducible:
always


Steps to Reproduce:
1. follow the procedure described at https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6/html-single/object_gateway_guide/index#configure-ldap-and-ceph-object-gateway
2. after restarting the RGW instances, LDAP authentication does not work due to the missing `rgw_ldap*` entries in ceph.conf
3. manually updating /var/lib/ceph/<fsid>/<rgw-instance-name>.<hostname>/config (mapped as ceph.conf) with the appropriate values and restarting the instance restores LDAP authentication
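For illustration, the entries added manually in step 3 look roughly like the following (section name and values are placeholders mirroring the options shown later in this bug, not taken from a real deployment):

```ini
[client.rgw]
	rgw_ldap_uri = ldaps://ldap.example.com:636
	rgw_ldap_binddn = "cn=Directory Manager"
	rgw_ldap_secret = /etc/ceph/bindpass
	rgw_ldap_searchdn = ou=people,dc=example,dc=com
	rgw_ldap_dnattr = uid
	rgw_s3_auth_use_ldap = true
```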

Actual results:
- 403 (authentication required) returned to any S3 client
- `radosgw-admin user list` does not return any LDAP users


Expected results:
- 200 returned to any authenticated S3 client
- `radosgw-admin user list` additionally shows LDAP users
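For reference, S3 clients authenticate against RGW's LDAP backend by presenting a base64-encoded token as their access key (normally generated with the `radosgw-token` utility). A rough Python sketch of what such a token contains; the exact JSON layout is an assumption from the upstream docs, not taken from this bug:

```python
import base64
import json

def make_ldap_token(uid: str, password: str) -> str:
    """Build an RGW LDAP-style token: a base64-encoded JSON document
    carrying the LDAP uid and password (approximating radosgw-token)."""
    token = {"RGW_TOKEN": {"version": 1, "type": "ldap",
                           "id": uid, "key": password}}
    return base64.b64encode(json.dumps(token).encode()).decode()

# Decode it again to show the structure an RGW with
# rgw_s3_auth_use_ldap enabled would unpack and bind with.
tok = make_ldap_token("milang", "s3cret")
decoded = json.loads(base64.b64decode(tok))
print(decoded["RGW_TOKEN"]["id"])  # → milang
```

With the `rgw_ldap*` entries missing from the container's ceph.conf, RGW never attempts the LDAP bind for such tokens, which matches the 403 seen above.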


Additional info:

Comment 2 Michaela Lang 2023-05-24 14:58:43 UTC
Hey Matt,

Unfortunately, `ceph config set` does not write those values into the container's config file:

```
ceph config set global rgw_ldap_uri 'ldaps://ldap.example.com:636'
ceph config set global rgw_ldap_binddn 'cn=Directory Manager'
ceph config set global rgw_ldap_secret '/etc/ceph/bindpass'
ceph config set global rgw_ldap_searchdn 'ou=people,dc=example,dc=com'
ceph config set global rgw_ldap_dnattr 'uid'
ceph config set global rgw_s3_auth_use_ldap true

ceph orch redeploy rgw.default

# check on timestamp to see update
ls -l /var/lib/ceph/<fsid>/rgw.default.<hostname>/config
cat /var/lib/ceph/<fsid>/rgw.default.<hostname>/config

# minimal ceph.conf for 374c8ec2-ea6f-11ed-8bf4-525400db0518
[global]
	fsid = <fsid>
	mon_host = [v2:node1:3300/0,v1:node1:6789/0] [v2:node2:3300/0,v1:node2:6789/0] [v2:node3:3300/0,v1:node3:6789/0]
```

The redeploy of course wipes whatever was manually added to the config /var/lib/ceph/<fsid>/rgw.default.<hostname>/config.

Comment 6 Scott Nipp 2023-07-27 16:27:21 UTC
I have 2 questions...  First, any thoughts on a backport of this fix to RHCS 5?

The other question is about something I was trying out for a customer case. The customer had used `ceph config set` to set most of the required LDAP parameters, and these do persist across restarts of the RGW services. His blocker was that /etc/ceph/bindpass was not being deployed into the container. In a lab environment, I was able to get /etc/ceph/bindpass into the container via the following extra args in rgw.yaml and then applying it. Not sure if this is helpful, but I thought I'd at least ask about it.

Additions to rgw.yaml below:

extra_container_args:
  - "-v"
  - "/etc/ceph/bindpass:/etc/ceph/bindpass:ro"

Then of course apply and restart:
# ceph orch apply -i rgw.yaml 
# ceph orch restart rgw.rgwsvcid
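Putting that together, a complete service spec carrying the extra mount might look like this sketch (the service id and placement label are assumptions, not from the customer's actual spec):

```yaml
service_type: rgw
service_id: rgwsvcid
placement:
  label: rgw
extra_container_args:
  - "-v"
  - "/etc/ceph/bindpass:/etc/ceph/bindpass:ro"
```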