Bug 2182943
Summary: | [GSS] [Tracker for Ceph https://bugzilla.redhat.com/show_bug.cgi?id=2189936] FSGroup is not correctly set on subPath volume for CephFS CSI | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Chen <cchen>
Component: | ceph | Assignee: | Venky Shankar <vshankar>
ceph sub component: | CephFS | QA Contact: | Shivam Durgbuns <sdurgbun>
Status: | CLOSED ERRATA | Docs Contact: |
Severity: | high | |
Priority: | unspecified | CC: | bkunal, bniver, cgaynor, deslimi, dmoessne, etamir, fsimonce, hekumar, jclaretm, kbg, khiremat, kramdoss, muagarwa, ocs-bugs, odf-bz-bot, rar, scollier, sheggodu, sostapov, tdesala, vshankar, xiubli
Version: | 4.12 | Keywords: | Reopened
Target Milestone: | --- | Flags: | khiremat: needinfo-
Target Release: | ODF 4.12.11 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | 4.12.4-1 | Doc Type: | If docs needed, set a value
Doc Text: | Previously, when a sub-directory was created, it always used its parent's non-projected `gid`/`uid` metadata to set up its own `gid`/`uid` metadata, so if the journal logs had not yet been flushed it picked up the old `gid`/`uid` values. With this fix, the sub-directory uses the projected `gid`/`uid` metadata, and as a result sub-directories inherit the correct `gid`/`uid` metadata from their parent (a minimal sketch of this behavior follows the table). | |
Story Points: | --- | |
Clone Of: | | |
: | 2189936 (view as bug list) | Environment: |
Last Closed: | 2023-12-26 01:37:16 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 2189936 | |
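A minimal sketch of the behavior the Doc Text describes, run against an already mounted CephFS subvolume. The mount point `/mnt/vol` is an assumption; gid 9999 and the `chgrp`/`chmod g+s` sequence mirror the reproducer in the description below.

```sh
#!/bin/bash
# Illustrative sketch only: /mnt/vol is an assumed mount point of a CephFS
# subvolume; gid 9999 is the group used throughout this bug report.
MNT=/mnt/vol

# Emulate the fsGroup setup on the volume root.
chgrp 9999 "$MNT"
chmod g+s,a+rwx "$MNT"

# Create a sub-directory right away, before the MDS journal has been flushed.
mkdir "$MNT/subdir"

# With the fix the new directory shows gid 9999 (inherited via the projected
# parent metadata); without it, the old gid (0/root) could still be returned.
stat -c '%n gid=%g mode=%a' "$MNT/subdir"
```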
Description Chen 2023-03-30 01:58:39 UTC
sh-4.4# ./test.sh
++ grep mon_host /etc/ceph/ceph.conf
++ awk '{print $3}'
+ mon_endpoints=172.30.227.108:6789,172.30.40.75:6789,172.30.114.250:6789
++ awk '{print $3}'
++ grep key /etc/ceph/keyring
+ my_secret=AQDHjD9kDgR6AhAAEdlX3qO3tb2PZqx/4USf5g==
+ for i in 1 2
+ ceph fs subvolume create ocs-storagecluster-cephfilesystem test1 csi
++ ceph fs subvolume getpath ocs-storagecluster-cephfilesystem test1 csi
+ path=/volumes/csi/test1/492940a4-ccbc-4c9d-a6b1-43a490efab8e
+ mkdir -p /tmp/registry1
+ ceph-fuse /tmp/registry1 -m=172.30.227.108:6789 --key=AQDHjD9kDgR6AhAAEdlX3qO3tb2PZqx/4USf5g== -n=client.admin -r /volumes/csi/test1/492940a4-ccbc-4c9d-a6b1-43a490efab8e -o nonempty --client_mds_namespace=ocs-storagecluster-cephfilesystem
2023-04-19T07:59:08.086+0000 7f3f63fbd540 -1 init, newargv = 0x56213485a800 newargc=17
ceph-fuse[1970]: starting ceph client
ceph-fuse[1970]: starting fuse
+ chgrp 9999 /tmp/registry1
+ chmod g+s,a+rwx /tmp/registry1
+ sleep 5
+ mkdir -p /tmp/registry1/a
+ mkdir -p /tmp/registry1/b
+ mkdir -p /tmp/registry1/b/x
+ ls -lrt /tmp/registry1/b/
total 1
drwxrwsrwx. 2 root 9999 0 Apr 19 07:59 x
+ mkdir /tmp/registry1/c
+ mkdir /tmp/registry1/d
+ ls -lrt /tmp/registry1
total 2
drwxrwsrwx. 2 root 9999 0 Apr 19 07:59 a
drwxrwsrwx. 3 root 9999 0 Apr 19 07:59 b
drwxrwsrwx. 2 root 9999 0 Apr 19 07:59 c
drwxrwsrwx. 2 root 9999 0 Apr 19 07:59 d
+ umount /tmp/registry1
+ rm -rf /tmp/registry1
+ ceph fs subvolume rm ocs-storagecluster-cephfilesystem test1 csi
+ for i in 1 2
+ ceph fs subvolume create ocs-storagecluster-cephfilesystem test2 csi
++ ceph fs subvolume getpath ocs-storagecluster-cephfilesystem test2 csi
+ path=/volumes/csi/test2/9cbc323e-9993-454c-8746-1b2bc4ace5c3
+ mkdir -p /tmp/registry2
+ ceph-fuse /tmp/registry2 -m=172.30.227.108:6789 --key=AQDHjD9kDgR6AhAAEdlX3qO3tb2PZqx/4USf5g== -n=client.admin -r /volumes/csi/test2/9cbc323e-9993-454c-8746-1b2bc4ace5c3 -o nonempty --client_mds_namespace=ocs-storagecluster-cephfilesystem
2023-04-19T07:59:14.063+0000 7ff6c34c3540 -1 init, newargv = 0x56163def7800 newargc=17
ceph-fuse[2093]: starting ceph client
ceph-fuse[2093]: starting fuse
+ chgrp 9999 /tmp/registry2
+ chmod g+s,a+rwx /tmp/registry2
+ sleep 5
+ mkdir -p /tmp/registry2/a
+ mkdir -p /tmp/registry2/b
+ mkdir -p /tmp/registry2/b/x
+ ls -lrt /tmp/registry2/b/
total 1
drwxrwsrwx. 2 root 9999 0 Apr 19 07:59 x
+ mkdir /tmp/registry2/c
+ mkdir /tmp/registry2/d
+ ls -lrt /tmp/registry2
total 2
drwxrwsrwx. 2 root 9999 0 Apr 19 07:59 a
drwxrwsrwx. 3 root 9999 0 Apr 19 07:59 b
drwxrwsrwx. 2 root 9999 0 Apr 19 07:59 c
drwxrwsrwx. 2 root 9999 0 Apr 19 07:59 d
+ umount /tmp/registry2
+ rm -rf /tmp/registry2
+ ceph fs subvolume rm ocs-storagecluster-cephfilesystem test2 csi
sh-4.4#
sh-4.4#
sh-4.4#
sh-4.4# vi test.sh
.bash_logout  .bash_profile  .bashrc  .cshrc  .tcshrc  anaconda-ks.cfg  anaconda-post.log  original-ks.cfg  test.sh
sh-4.4# vi test.sh
sh-4.4# ./test.sh
++ grep mon_host /etc/ceph/ceph.conf
++ awk '{print $3}'
+ mon_endpoints=172.30.227.108:6789,172.30.40.75:6789,172.30.114.250:6789
++ grep key /etc/ceph/keyring
++ awk '{print $3}'
+ my_secret=AQDHjD9kDgR6AhAAEdlX3qO3tb2PZqx/4USf5g==
+ for i in 1 2
+ ceph fs subvolume create ocs-storagecluster-cephfilesystem test1 csi
++ ceph fs subvolume getpath ocs-storagecluster-cephfilesystem test1 csi
+ path=/volumes/csi/test1/1251b28f-ca6a-4b0b-860f-ee3ebdbad933
+ mkdir -p /tmp/registry1
+ mount -t ceph -o mds_namespace=ocs-storagecluster-cephfilesystem,name=admin,secret=AQDHjD9kDgR6AhAAEdlX3qO3tb2PZqx/4USf5g== 172.30.227.108:6789,172.30.40.75:6789,172.30.114.250:6789://volumes/csi/test1/1251b28f-ca6a-4b0b-860f-ee3ebdbad933 /tmp/registry1
+ chgrp 9999 /tmp/registry1
+ chmod g+s,a+rwx /tmp/registry1
+ sleep 5
+ mkdir -p /tmp/registry1/a
+ mkdir -p /tmp/registry1/b
+ mkdir -p /tmp/registry1/b/x
+ ls -lrt /tmp/registry1/b/
total 0
drwxrwsrwx. 2 root 9999 0 Apr 19 08:00 x
+ mkdir /tmp/registry1/c
+ mkdir /tmp/registry1/d
+ ls -lrt /tmp/registry1
total 0
drwxrwsrwx. 2 root 9999 0 Apr 19 08:00 a
drwxrwsrwx. 3 root 9999 1 Apr 19 08:00 b
drwxrwsrwx. 2 root 9999 0 Apr 19 08:00 c
drwxrwsrwx. 2 root 9999 0 Apr 19 08:00 d
+ umount /tmp/registry1
+ rm -rf /tmp/registry1
+ ceph fs subvolume rm ocs-storagecluster-cephfilesystem test1 csi
+ for i in 1 2
+ ceph fs subvolume create ocs-storagecluster-cephfilesystem test2 csi
++ ceph fs subvolume getpath ocs-storagecluster-cephfilesystem test2 csi
+ path=/volumes/csi/test2/884662eb-86d9-421f-b853-d008334ae93b
+ mkdir -p /tmp/registry2
+ mount -t ceph -o mds_namespace=ocs-storagecluster-cephfilesystem,name=admin,secret=AQDHjD9kDgR6AhAAEdlX3qO3tb2PZqx/4USf5g== 172.30.227.108:6789,172.30.40.75:6789,172.30.114.250:6789://volumes/csi/test2/884662eb-86d9-421f-b853-d008334ae93b /tmp/registry2
+ chgrp 9999 /tmp/registry2
+ chmod g+s,a+rwx /tmp/registry2
+ sleep 5
+ mkdir -p /tmp/registry2/a
+ mkdir -p /tmp/registry2/b
+ mkdir -p /tmp/registry2/b/x
+ ls -lrt /tmp/registry2/b/
total 0
drwxrwsrwx. 2 root 9999 0 Apr 19 08:00 x
+ mkdir /tmp/registry2/c
+ mkdir /tmp/registry2/d
+ ls -lrt /tmp/registry2
total 0
drwxrwsrwx. 2 root 9999 0 Apr 19 08:00 a
drwxrwsrwx. 3 root 9999 1 Apr 19 08:00 b
drwxrwsrwx. 2 root 9999 0 Apr 19 08:00 c
drwxrwsrwx. 2 root 9999 0 Apr 19 08:00 d
+ umount /tmp/registry2
+ rm -rf /tmp/registry2
+ ceph fs subvolume rm ocs-storagecluster-cephfilesystem test2 csi
sh-4.4#

--------------------------------------------

Note: if I put a 5-second delay between the chmod and the mkdir of the first directory, the right permissions appear to be set. Not sure it matters, but pasting it here for reference.
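For readability, here is a sketch of what the `test.sh` reproducer probably looks like, reconstructed from the `set -x` traces above. The script itself is not attached to the bug, so the loop structure, quoting, and the way the first monitor endpoint is picked for ceph-fuse are assumptions; the commands and options are taken from the trace. The only difference between the two transcripts is the mount method: ceph-fuse in the first run, the kernel ceph client in the second.

```sh
#!/bin/bash
# Hypothetical reconstruction of test.sh, inferred from the xtrace output
# above; the actual script is not attached to the bug, so treat the exact
# structure and option spellings as assumptions.
set -x

# Monitor list and admin key come from the toolbox pod's config and keyring.
mon_endpoints=$(grep mon_host /etc/ceph/ceph.conf | awk '{print $3}')
my_secret=$(grep key /etc/ceph/keyring | awk '{print $3}')

for i in 1 2; do
    # Create a CSI-style subvolume and resolve its path.
    ceph fs subvolume create ocs-storagecluster-cephfilesystem "test$i" csi
    path=$(ceph fs subvolume getpath ocs-storagecluster-cephfilesystem "test$i" csi)

    mkdir -p "/tmp/registry$i"

    # First run mounted with ceph-fuse; the second run switched to the kernel client:
    #   mount -t ceph \
    #     -o mds_namespace=ocs-storagecluster-cephfilesystem,name=admin,secret=$my_secret \
    #     "${mon_endpoints}:/${path}" "/tmp/registry$i"
    ceph-fuse "/tmp/registry$i" -m="${mon_endpoints%%,*}" --key="$my_secret" \
        -n=client.admin -r "$path" -o nonempty \
        --client_mds_namespace=ocs-storagecluster-cephfilesystem

    # Emulate the fsGroup setup on the volume root: group 9999 plus the setgid bit.
    chgrp 9999 "/tmp/registry$i"
    chmod g+s,a+rwx "/tmp/registry$i"
    sleep 5

    # New sub-directories (what a subPath mount would create) should inherit gid 9999.
    mkdir -p "/tmp/registry$i/a"
    mkdir -p "/tmp/registry$i/b"
    mkdir -p "/tmp/registry$i/b/x"
    ls -lrt "/tmp/registry$i/b/"
    mkdir "/tmp/registry$i/c"
    mkdir "/tmp/registry$i/d"
    ls -lrt "/tmp/registry$i"

    umount "/tmp/registry$i"
    rm -rf "/tmp/registry$i"
    ceph fs subvolume rm ocs-storagecluster-cephfilesystem "test$i" csi
done
```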
Verified! Able to see the expected result in FS.

Job: https://ocs4-jenkins-csb-odf-qe.apps.ocp-c1.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/25149/

[sdurgbun auth]$ oc get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
local-pvc-name   Bound    pvc-f35bc2e7-ed86-4b25-b524-b071df9b8c2d   1Gi        RWO            ocs-storagecluster-cephfs   15s
[sdurgbun auth]$ oc get pods
NAME    READY   STATUS              RESTARTS   AGE
rhel7   0/1     ContainerCreating   0          68s
[sdurgbun auth]$ oc get pods
NAME    READY   STATUS    RESTARTS   AGE
rhel7   1/1     Running   0          95s
[sdurgbun auth]$ oc rsh rhel7
sh-4.2$ ls -l /etc/healing-controller.d/
total 0
drwxrwsr-x. 2 root 9999 0 Jun 1 10:26 critical-containers-logs
drwxrwsr-x. 2 root 9999 0 Jun 1 10:26 record
sh-4.2$
sh-4.2$ exit
[sdurgbun auth]$ oc get csv --show-labels
No resources found in testbz namespace.
[sdurgbun auth]$ oc get csv --namespace openshift-storage --show-labels
NAME                                    DISPLAY                       VERSION        REPLACES                                PHASE       LABELS
mcg-operator.v4.12.4-rhodf              NooBaa Operator               4.12.4-rhodf   mcg-operator.v4.12.3-rhodf              Succeeded   operators.coreos.com/mcg-operator.openshift-storage=
ocs-operator.v4.12.4-rhodf              OpenShift Container Storage   4.12.4-rhodf   ocs-operator.v4.12.3-rhodf              Succeeded   full_version=4.12.4-1,operatorframework.io/arch.amd64=supported,operatorframework.io/arch.ppc64le=supported,operatorframework.io/arch.s390x=supported,operators.coreos.com/ocs-operator.openshift-storage=
odf-csi-addons-operator.v4.12.4-rhodf   CSI Addons                    4.12.4-rhodf   odf-csi-addons-operator.v4.12.3-rhodf   Succeeded   operators.coreos.com/odf-csi-addons-operator.openshift-storage=
odf-operator.v4.12.4-rhodf              OpenShift Data Foundation     4.12.4-rhodf   odf-operator.v4.12.3-rhodf              Succeeded   full_version=4.12.4-1,operatorframework.io/arch.amd64=supported,operatorframework.io/arch.ppc64le=supported,operatorframework.io/arch.s390x=supported,operators.coreos.com/odf-operator.openshift-storage=
[sdurgbun auth]$ cat test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc-name
  namespace: testbz
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-cephfs
  volumeMode: Filesystem
[sdurgbun auth]$ cat test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rhel7
  labels:
    app: rhel7
spec:
  containers:
    - name: myapp-container
      image: registry.access.redhat.com/ubi7/ubi
      command: ['sh', '-c', 'mkdir /etc/healing-controller.d -p && echo The app is running! && sleep 3600']
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        seLinuxOptions:
          level: s0
      volumeMounts:
        - mountPath: /etc/healing-controller.d/record
          name: local-disks
          subPath: record
        - mountPath: /etc/healing-controller.d/critical-containers-logs
          name: local-disks
          subPath: critical-containers-logs
  volumes:
    - name: local-disks
      persistentVolumeClaim:
        claimName: local-pvc-name
  securityContext:
    fsGroup: 9999
    runAsGroup: 9999
    runAsUser: 9999
[sdurgbun auth]$

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.12.4 security and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3609
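As an aside to the verification above, one way to double-check the group and setgid bit on the subPath directories from inside the rhel7 pod is sketched below; the `stat` format string is illustrative, and the expected output follows from the `ls -l` listing in the verification comment.

```sh
# Sketch only: inspects the subPath directories created for the rhel7 pod above.
# Expected: group 9999 and the setgid bit set (mode 2775 per the ls output).
oc rsh rhel7 stat -c '%n %U:%G %a' \
    /etc/healing-controller.d/record \
    /etc/healing-controller.d/critical-containers-logs
```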