Description of problem:

idmapd.conf allows controlling the NFSv4.x server-side ID mapping settings, such as setting a "Domain" or configuring the ID squashing preferences for unmapped users and groups via the "Nobody-User" and "Nobody-Group" options. ceph-ansible already has the ability to deploy a custom idmapd.conf file or apply specific overrides [1]. A similar mechanism should be available when NFS Ganesha is deployed with cephadm, for example through the NFS service spec.

[1] https://github.com/ceph/ceph-ansible/commit/2db2208e406df83806c264207e7df90623add154

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
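For reference, a minimal idmapd.conf illustrating the settings mentioned above (a sketch only; the domain and the nobody user/group values are placeholders, not taken from this report):

[General]
# Realm used to map user@domain names to local users
Domain = example.com

[Mapping]
# Local identities used when a name cannot be mapped
Nobody-User = nfsnobody
Nobody-Group = nfsnobody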
Hi Teoman ONAY, Could you please help with the steps to verify this BZ?
Verified this BZ with

# ceph --version
ceph version 18.2.1-89.el9cp (926619fe7135cbd6d305b46782ee7ecc7be199a3) reef (stable)

# rpm -qa | grep nfs
libnfsidmap-2.5.4-20.el9.x86_64
nfs-utils-2.5.4-20.el9.x86_64
nfs-ganesha-selinux-5.7-2.el9cp.noarch
nfs-ganesha-5.7-2.el9cp.x86_64
nfs-ganesha-rgw-5.7-2.el9cp.x86_64
nfs-ganesha-ceph-5.7-2.el9cp.x86_64
nfs-ganesha-rados-grace-5.7-2.el9cp.x86_64
nfs-ganesha-rados-urls-5.7-2.el9cp.x86_64

Scenario 1: idmap_conf section specified in the spec
=====================================

[root@cali013 ~]# vi nfs.yaml
[root@cali013 ~]# cat nfs.yaml
networks:
- 10.8.128.0/21
service_type: nfs
service_id: nfsganesha
placement:
  hosts:
  - cali013
  - cali015
  - cali016
spec:
  idmap_conf:
    general:
      local-realms: domain.org
    mapping:
      nobody-group: nfsnobody
      nobody-user: nfsnobody

[root@cali013 ~]# cephadm shell --mount nfs.yaml:/var/lib/ceph/nfs.yaml
Inferring fsid 4e687a60-638e-11ee-8772-b49691cee574
Inferring config /var/lib/ceph/4e687a60-638e-11ee-8772-b49691cee574/mon.cali013/config
Using ceph image with id '2abcbe3816d6' and tag 'ceph-7.1-rhel-9-containers-candidate-63457-20240326021251' created on 2024-03-26 02:15:29 +0000 UTC
registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:358fc7e11068221bbe1a0172e0f056bfd47cf7f1a983bbb8d6d238d3be21f5eb

[ceph: root@cali013 /]# ceph orch apply -i /var/lib/ceph/nfs.yaml
Scheduled nfs.nfsganesha update...

[ceph: root@cali013 /]# ceph nfs cluster info nfsganesha
{
  "nfsganesha": {
    "backend": [
      {
        "hostname": "cali013",
        "ip": "10.8.130.13",
        "port": 2049
      },
      {
        "hostname": "cali015",
        "ip": "10.8.130.15",
        "port": 2049
      },
      {
        "hostname": "cali016",
        "ip": "10.8.130.16",
        "port": 2049
      }
    ],
    "monitor_port": 8999,
    "port": 20490,
    "virtual_ip": "10.8.130.236"
  }
}

# ceph orch ps | grep nfs
nfs.nfsganesha.0.0.cali013.krjtme  cali013  10.8.130.13:2049  running (2m)    2m ago  2m  50.1M  -  5.7  2abcbe3816d6  23991ee8490f
nfs.nfsganesha.1.0.cali015.xysrog  cali015  10.8.130.15:2049  running (2m)  117s ago  2m  52.2M  -  5.7  2abcbe3816d6  df822f56f69c
nfs.nfsganesha.2.0.cali016.ymcspl  cali016  10.8.130.16:2049  running (2m)    2m ago  2m  49.9M  -  5.7  2abcbe3816d6  4b606b0d2d7e

Log in to the NFS container on each node running NFS and check the content of /etc/ganesha/idmap.conf
-----------------------------------------------------------------------------
[root@cali013 ~]# podman exec -it 23991ee8490f /bin/bash
[root@cali013 /]# cat /etc/ganesha/idmap.conf
[general]
local-realms = domain.org
[mapping]
nobody-group = nfsnobody
nobody-user = nfsnobody
[root@cali013 /]#

Scenario 2: Deploy the spec file with the idmap_conf section NOT specified
================================

[root@cali013 ~]# cat nfs.yaml
networks:
- 10.8.128.0/21
service_type: nfs
service_id: nfsganesha
placement:
  hosts:
  - cali013
  - cali015
  - cali016

[root@cali013 ~]# cephadm shell --mount nfs.yaml:/var/lib/ceph/nfs.yaml
Inferring fsid 4e687a60-638e-11ee-8772-b49691cee574
Inferring config /var/lib/ceph/4e687a60-638e-11ee-8772-b49691cee574/mon.cali013/config
Using ceph image with id '2abcbe3816d6' and tag 'ceph-7.1-rhel-9-containers-candidate-63457-20240326021251' created on 2024-03-26 02:15:29 +0000 UTC
registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:358fc7e11068221bbe1a0172e0f056bfd47cf7f1a983bbb8d6d238d3be21f5eb

[ceph: root@cali013 /]# ceph orch apply -i /var/lib/ceph/nfs.yaml
Scheduled nfs.nfsganesha update...
Now check whether /etc/ganesha/idmap.conf exists --> the file exists but is empty

[root@cali013 ~]# podman ps | grep nfs
838924b3beed  registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:0fc55c27d465d2852115a20f7d7f596edb3efde433c246c218355e71772dbc3e  -F -L STDERR -N N...  About a minute ago  Up About a minute  ceph-4e687a60-638e-11ee-8772-b49691cee574-nfs-nfsganesha-0-0-cali013-optqjq

[root@cali013 ~]# podman exec -it 838924b3beed /bin/bash
[root@cali013 /]# cat /etc/ganesha/idmap.conf
[root@cali013 /]#

Based on comment #17 and this comment, moving this BZ to the verified state.
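As an additional (optional) check of the mapping behaviour from a client, something along these lines could be used. This is only a sketch: the client host, mount point and the exact export are hypothetical and not part of the runs above; the virtual IP and port are the ones reported by "ceph nfs cluster info nfsganesha".

# On the NFS client, align the idmap domain with the realm configured in the spec
[root@client ~]# grep -i domain /etc/idmapd.conf
Domain = domain.org

# Mount an export through the cluster virtual IP and inspect ownership;
# names the client cannot map should fall back to the configured nobody user/group
[root@client ~]# mount -t nfs -o vers=4.1,port=20490 10.8.130.236:/ /mnt
[root@client ~]# ls -l /mnt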
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925