This bug was initially created as a copy of Bug #1917356.

I am copying this bug because: fixing the RHOSP BZ from which this is cloned requires a change in ceph-ansible. We are targeting the OSP fix to OSP 16.z, so RHCS 4 and ceph-ansible are relevant rather than cephadm and RHCS 5.

Description of problem:
While creating a share in manila, we can see that the ganesha export template contains Squash = None. However, after mounting the share on a client, any file created by the root user on the client is owned by the nobody user.

Create a share in manila and allow access to an IP address:
+++
(overcloud) [stack@undercloud16 ~]$ manila access-allow share2 ip 172.16.70.21
+--------------+--------------------------------------+
| Property     | Value                                |
+--------------+--------------------------------------+
| id           | dbdb9211-d85e-49c7-9174-caecc95deac0 |
| share_id     | ae0f39b9-f1dd-425d-8239-9a727fdb01ed |
| access_level | rw                                   |
| access_to    | 172.16.70.21                         |
| access_type  | ip                                   |
| state        | queued_to_apply                      |
| access_key   | None                                 |
| created_at   | 2021-01-14T16:03:01.000000           |
| updated_at   | None                                 |
| metadata     | {}                                   |
+--------------+--------------------------------------+

(overcloud) [stack@undercloud16 ~]$ manila show ae0f39b9-f1dd-425d-8239-9a727fdb01ed
+---------------------------------------+---------------------------------------------------------------------------+
| Property                              | Value                                                                     |
+---------------------------------------+---------------------------------------------------------------------------+
| id                                    | ae0f39b9-f1dd-425d-8239-9a727fdb01ed                                      |
| size                                  | 5                                                                         |
| availability_zone                     | nova                                                                      |
| created_at                            | 2021-01-14T16:01:58.000000                                                |
| status                                | available                                                                 |
| name                                  | share2                                                                    |
| description                           | None                                                                      |
| project_id                            | 66fef91762434f83ae82beaab658a219                                          |
| snapshot_id                           | None                                                                      |
| share_network_id                      | None                                                                      |
| share_proto                           | NFS                                                                       |
| metadata                              | {}                                                                        |
| share_type                            | 57e6d5fb-fd86-4c38-a493-6d220d3fc579                                      |
| is_public                             | False                                                                     |
| snapshot_support                      | False                                                                     |
| task_state                            | None                                                                      |
| share_type_name                       | default                                                                   |
| access_rules_status                   | active                                                                    |
| replication_type                      | None                                                                      |
| has_replicas                          | False                                                                     |
| user_id                               | b4b5f09744c84ed8a20eeacb75d702dc                                          |
| create_share_from_snapshot_support    | False                                                                     |
| revert_to_snapshot_support            | False                                                                     |
| share_group_id                        | None                                                                      |
| source_share_group_snapshot_member_id | None                                                                      |
| mount_snapshot_support                | False                                                                     |
| share_server_id                       | None                                                                      |
| host                                  | hostgroup@cephfs#cephfs                                                   |
| export_locations                      |                                                                           |
|                                       | id = 56167427-10ff-4f4c-a02a-d39916b4010d                                 |
|                                       | path = 172.16.70.9:/volumes/_nogroup/452a7159-d1d7-4b19-89e8-801eb17619c9 |
|                                       | preferred = False                                                         |
|                                       | share_instance_id = 452a7159-d1d7-4b19-89e8-801eb17619c9                  |
|                                       | is_admin_only = False                                                     |
+---------------------------------------+---------------------------------------------------------------------------+
+++

From the logs we can see that manila writes an export template:
+++
2021-01-14 16:03:03.854 43 DEBUG ceph_volume_client [req-878e689c-1a45-4774-877b-6b9fd4377ba8 b4b5f09744c84ed8a20eeacb75d702dc 66fef91762434f83ae82beaab658a219 - - -] Authorizing Ceph id 'ganesha-452a7159-d1d7-4b19-89e8-801eb17619c9' for path '/volumes/_nogroup/452a7159-d1d7-4b19-89e8-801eb17619c9' _authorize_ceph /usr/lib/python3.6/site-packages/ceph_volume_client.py:1074

2021-01-14 16:03:04.332 43 DEBUG oslo_concurrency.processutils [req-878e689c-1a45-4774-877b-6b9fd4377ba8 b4b5f09744c84ed8a20eeacb75d702dc 66fef91762434f83ae82beaab658a219 - - -] Running cmd (subprocess): mktemp -p /etc/ganesha/export.d -t share-452a7159-d1d7-4b19-89e8-801eb17619c9.conf.XXXXXX execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372

2021-01-14 16:03:04.371 43 DEBUG oslo_concurrency.processutils [req-878e689c-1a45-4774-877b-6b9fd4377ba8 b4b5f09744c84ed8a20eeacb75d702dc 66fef91762434f83ae82beaab658a219 - - -] CMD "mktemp -p /etc/ganesha/export.d -t share-452a7159-d1d7-4b19-89e8-801eb17619c9.conf.XXXXXX" returned: 0 in 0.039s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409

2021-01-14 16:03:04.377 43 DEBUG oslo_concurrency.processutils [req-878e689c-1a45-4774-877b-6b9fd4377ba8 b4b5f09744c84ed8a20eeacb75d702dc 66fef91762434f83ae82beaab658a219 - - -] Running cmd (subprocess): sh -c echo 'EXPORT { Export_Id = 1003; Path = "/volumes/_nogroup/452a7159-d1d7-4b19-89e8-801eb17619c9"; FSAL { Name = "Ceph"; User_Id = "ganesha-452a7159-d1d7-4b19-89e8-801eb17619c9"; Secret_Access_Key = "AQA3awBgBY1zMxAAkXeV3zDcsZyoy+AfFuzXjw=="; } Pseudo = "/volumes/_nogroup/452a7159-d1d7-4b19-89e8-801eb17619c9"; SecType = "sys"; Tag = "share-452a7159-d1d7-4b19-89e8-801eb17619c9"; CLIENT { Access_Type = "rw"; Clients = 172.16.70.21; } Squash = "None"; } ' > /etc/ganesha/export.d/share-452a7159-d1d7-4b19-89e8-801eb17619c9.conf.gkHhtg execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372

2021-01-14 16:03:04.402 43 DEBUG oslo_concurrency.processutils [req-878e689c-1a45-4774-877b-6b9fd4377ba8 b4b5f09744c84ed8a20eeacb75d702dc 66fef91762434f83ae82beaab658a219 - - -] CMD "sh -c echo 'EXPORT { Export_Id = 1003; Path = "/volumes/_nogroup/452a7159-d1d7-4b19-89e8-801eb17619c9"; FSAL { Name = "Ceph"; User_Id = "ganesha-452a7159-d1d7-4b19-89e8-801eb17619c9"; Secret_Access_Key = "AQA3awBgBY1zMxAAkXeV3zDcsZyoy+AfFuzXjw=="; } Pseudo = "/volumes/_nogroup/452a7159-d1d7-4b19-89e8-801eb17619c9"; SecType = "sys"; Tag = "share-452a7159-d1d7-4b19-89e8-801eb17619c9"; CLIENT { Access_Type = "rw"; Clients = 172.16.70.21; } Squash = "None"; }
+++

We can see that it shouldn't map the root user to nobody on the client, because the Squash parameter is passed in the ganesha config:
+++
[root@localhost ~]# mount -v 172.16.70.9:/volumes/_nogroup/452a7159-d1d7-4b19-89e8-801eb17619c9 /mnt/
mount.nfs: timeout set for Thu Jan 14 11:08:17 2021
mount.nfs: trying text-based options 'vers=4.1,addr=172.16.70.9,clientaddr=172.16.70.21'
[root@localhost ~]# df -h
Filesystem                                                          Size  Used Avail Use% Mounted on
/dev/vda1                                                           7.9G  945M  6.9G  12% /
devtmpfs                                                            897M     0  897M   0% /dev
tmpfs                                                               919M     0  919M   0% /dev/shm
tmpfs                                                               919M   17M  903M   2% /run
tmpfs                                                               919M     0  919M   0% /sys/fs/cgroup
tmpfs                                                               184M     0  184M   0% /run/user/0
172.16.70.9:/volumes/_nogroup/452a7159-d1d7-4b19-89e8-801eb17619c9  5.0G     0  5.0G   0% /mnt
[root@localhost ~]# cd /mnt/
[root@localhost mnt]# ls -la
total 1
drwxr-xr-x.  2 nobody nobody   0 Jan 14 11:01 .
dr-xr-xr-x. 17 root   root   224 Oct 10  2018 ..
[root@localhost mnt]# touch test1
[root@localhost mnt]# ls -la
total 1
drwxr-xr-x.  2 nobody nobody   0 Jan 14 11:06 .
dr-xr-xr-x. 17 root   root   224 Oct 10  2018 ..
-rw-r--r--.  1 nobody nobody   0 Jan 14 11:06 test1
[root@localhost mnt]#
+++

Even if we start the nfs-idmap service on the client and remount the share, the result is still the same.

Version-Release number of selected component (if applicable):
+++
[root@overcloud-controller-0 ~]# podman ps | grep -i manila
b04b24c0ee4c  undercloud16.ctlplane.rhlab2961.com:8787/rhosp-rhel8/openstack-manila-share:16.1      /bin/bash /usr/lo...  3 days ago  Up 3 days ago  openstack-manila-share-podman-0
f739fd54b19f  undercloud16.ctlplane.rhlab2961.com:8787/rhosp-rhel8/openstack-manila-scheduler:16.1  kolla_start           5 days ago  Up 5 days ago  manila_scheduler
d3bf4052367d  undercloud16.ctlplane.rhlab2961.com:8787/rhosp-rhel8/openstack-manila-api:16.1        kolla_start           5 days ago  Up 5 days ago  manila_api
[root@overcloud-controller-0 ~]# podman images | grep -i manila
undercloud16.ctlplane.rhlab2961.com:8787/rhosp-rhel8/openstack-manila-api        16.1        6dc72f5e58bf  4 weeks ago  853 MB
undercloud16.ctlplane.rhlab2961.com:8787/rhosp-rhel8/openstack-manila-share      16.1        83db86aa3a30  4 weeks ago  1.05 GB
cluster.common.tag/openstack-manila-share                                        pcmklatest  83db86aa3a30  4 weeks ago  1.05 GB
undercloud16.ctlplane.rhlab2961.com:8787/rhosp-rhel8/openstack-manila-scheduler  16.1        431ea55ceaa1  4 weeks ago  801 MB
[root@overcloud-controller-0 ~]# podman image inspect 83db86aa3a30 | grep -e version -e tag
"cluster.common.tag/openstack-manila-share:pcmklatest"
"cluster.common.tag/openstack-manila-share@sha256:50efd71a9e26dbe4aa6a7ba421ef47147babc8e79a696239855ed808fe4a042d",
"io.openshift.tags": "rhosp osp openstack osp-16.1",
"version": "16.1.3"
"io.openshift.tags": "rhosp osp openstack osp-16.1",
"version": "16.1.3"
+++

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
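For reference, Ganesha accepts Squash = "None" (quoted or not) and treats it as "do not squash any identity", so the export template logged above should indeed leave root as root. A minimal sketch of how such an export block is assembled, matching the template seen in the log (the function name and parameter names are illustrative, not manila's actual API; the secret is a placeholder):

```python
def make_export_block(export_id, path, user_id, secret, client_ip,
                      squash="None"):
    """Render a Ganesha EXPORT block like the one in the manila log.

    Note that Squash is emitted as the quoted string "None", which
    Ganesha parses as no squashing (root stays root).
    """
    return (
        "EXPORT {\n"
        f"    Export_Id = {export_id};\n"
        f'    Path = "{path}";\n'
        "    FSAL {\n"
        '        Name = "Ceph";\n'
        f'        User_Id = "{user_id}";\n'
        f'        Secret_Access_Key = "{secret}";\n'
        "    }\n"
        f'    Pseudo = "{path}";\n'
        '    SecType = "sys";\n'
        f'    Tag = "share-{path.rsplit("/", 1)[-1]}";\n'
        "    CLIENT {\n"
        '        Access_Type = "rw";\n'
        f"        Clients = {client_ip};\n"
        "    }\n"
        f'    Squash = "{squash}";\n'
        "}\n"
    )

block = make_export_block(
    1003,
    "/volumes/_nogroup/452a7159-d1d7-4b19-89e8-801eb17619c9",
    "ganesha-452a7159-d1d7-4b19-89e8-801eb17619c9",
    "<redacted>",          # placeholder; never hardcode real Ceph keys
    "172.16.70.21",
)
print(block)
```

Since squashing is disabled in this export, the nobody ownership seen on the client must come from somewhere other than the Squash setting.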
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2445
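As background to the symptom above: with NFSv4, file owners travel over the wire as user@domain strings, and when the client-side ID mapper cannot translate them (for example because the idmap domain configured on the Ganesha side differs from the client's), ownership falls back to nobody even though Squash = "None" is in effect. A hedged sketch of the relevant idmap configuration, assuming a placeholder domain of example.com (the real domain, and whether the file lives on the host or inside the ganesha container, depend on the deployment):

```ini
; /etc/idmapd.conf -- the Domain value must agree between the
; NFS-Ganesha server (or its container) and every NFS client.
; "example.com" is a placeholder, not taken from this bug report.
[General]
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
```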