Description of problem:

Restic does not appear to respect the supplementalGroups of a namespace (https://docs.openshift.com/container-platform/3.11/install_config/persistent_storage/pod_security_context.html#supplemental-groups).

After changing permissions on the NFS side, the stage with copy can be run successfully, but that should not be required, since the supplemental group is set on the NFS export and the stage pod respects it.

The backup fails with the following error:

backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280" error.function="github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id> namespace=openshift-migration
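For reference, a minimal sketch of how a supplemental group is normally applied through the pod securityContext so that a pod can access a group-owned NFS export. The GID, pod name, image, and claim name below are illustrative, not taken from the affected cluster:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client                  # hypothetical pod name
spec:
  securityContext:
    supplementalGroups:             # GID owning the NFS export; every process
    - 5555                          # in the pod also runs with this group
  containers:
  - name: app
    image: registry.example.com/app:latest    # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-pvc            # hypothetical PVC bound to the NFS export

The stage pod honors this setting; the restic pods do not, which is what this bug tracks.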
*** Bug 1874215 has been marked as a duplicate of this bug. ***
I have submitted a PR which simply allows a user to provide a list of GIDs under the MigrationController resource:

spec:
  ...
  restic_supplemental_groups:
  - 5555
  - 6666

These GIDs will be added to each restic pod's supplementalGroups field under securityContext.

https://github.com/konveyor/mig-operator/pull/442

Andreas, would you please take a look at the attached PR and confirm this would solve the customer's use case? Please take note of the context of the shell within the pod. I have tested this in my environment, but I am unsure whether it exactly mirrors the customer's use case with NFS. I believe it should.
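To illustrate, a sketch of what the operator would render into the restic DaemonSet pod template once the field is set. The GIDs are the example values above, and the exact template layout in mig-operator may differ:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: restic
  namespace: openshift-migration
spec:
  template:
    spec:
      securityContext:
        supplementalGroups:         # populated from restic_supplemental_groups
        - 5555
        - 6666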
Verified in MTC 1.3

openshift-migration-rhel7-operator@sha256:233af9517407e792bbb34c58558346f2424b8b0ab54be6f12f9f97513e391a6a

- name: MIG_CONTROLLER_REPO
  value: openshift-migration-controller-rhel8@sha256
- name: MIG_CONTROLLER_TAG
  value: d58cccd15cc61be039cd1c8dae9584132dbd59095faf4f4f027fdb05d1860bdb
- name: MIG_UI_REPO
  value: openshift-migration-ui-rhel8@sha256
- name: MIG_UI_TAG
  value: f306de1051cd2029944b2aa9511626b1dce365317fd04168478f14a43ad95e44
- name: MIGRATION_REGISTRY_REPO
  value: openshift-migration-registry-rhel8@sha256
- name: MIGRATION_REGISTRY_TAG
  value: 3b4a26983053bccc548bc106bdfc0f651075301b90572a03d9d31d62a6c3d769
- name: VELERO_REPO
  value: openshift-migration-velero-rhel8@sha256
- name: VELERO_TAG
  value: f844d84dd85f8ae75dc651ca7dd206463f4a10167417f8d6c8793c01c9b72152
I would like to stress that even if the migration works after the warning, we need to be aware that user and group ownership is lost on the migrated files. My application had no problem with this, but other applications may break because of it.

Before migration:

# ls -larth pv1
total 12K
drwxrwxrwx. 52 root root       4.0K Sep 17 13:43 ..
drwxrwxrwx.  2 333  1000680001   41 Sep 17 14:17 .
-rw-rw-r--.  1 333  1000680001 1.7K Sep 17 14:20 error.log
-rw-rw-r--.  1 333  1000680001  831 Sep 17 15:31 access.log

After migration:

# ls -larth pv33
total 12K
-rw-rw-r--. 1 nfsnobody nfsnobody 1.7K Sep 17 14:20 error.log
-rw-rw-r--. 1 nfsnobody nfsnobody  831 Sep 17 15:31 access.log
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Migration Toolkit for Containers (MTC) Tool image release advisory 1.3.0), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4148