Bug 1542868
| Summary: | Delete PV failed after a Pod using the PV in different namespace from local-storage-provisioner | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Qin Ping <piqin> |
| Component: | Storage | Assignee: | Jan Safranek <jsafrane> |
| Status: | CLOSED ERRATA | QA Contact: | Qin Ping <piqin> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.9.0 | CC: | aos-bugs, aos-storage-staff, bchilds, jsafrane |
| Target Milestone: | --- | | |
| Target Release: | 3.9.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-03-28 14:26:32 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Damn SELinux :-( The provisioner needs to run with an SELinux label that allows it to modify directories relabeled by Kubelet for "normal users".

Documentation change: https://github.com/openshift/openshift-docs/pull/7633
Template change: https://github.com/openshift/origin/pull/18498

For OSE 3.7, when following the doc at https://docs.openshift.com/container-platform/3.7/install_config/configuring_local.html#install-config-configuring-local you need to create the local-storage hostPath directory and add the SELinux permission below. Otherwise the provisioning pods report "permission denied".

chcon -Rt svirt_sandbox_file_t /mnt/local-storage

Verification of this issue in openshift v3.9.0-0.47.0 failed: the daemonset cannot run correctly.
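The failure comes down to MCS category dominance: Kubelet relabels the volume directory with the consuming pod's category pair, and the provisioner can only modify those files if its own level's categories are a superset of the directory's. A rough Python sketch of that dominance check (illustrative only; this is neither SELinux nor OpenShift code, and the function names are made up):

```python
def parse_mcs(level):
    """Parse an MCS level like "s0:c0.c1023" or "s0:c9,c12" into
    (sensitivity, set of category numbers). A bare "s0" has no categories."""
    parts = level.split(":")
    sens = parts[0]
    cats = set()
    if len(parts) > 1:
        for item in parts[1].split(","):
            if "." in item:  # "c0.c1023" is a contiguous range
                lo, hi = item.split(".")
                cats.update(range(int(lo.lstrip("c")), int(hi.lstrip("c")) + 1))
            else:
                cats.add(int(item.lstrip("c")))
    return sens, cats

def dominates(high, low):
    """True if `high` can access files labeled `low`:
    same-or-higher sensitivity and a superset of the categories."""
    hs, hc = parse_mcs(high)
    ls, lc = parse_mcs(low)
    return hs >= ls and hc >= lc
```

Under this rule, a level spanning the full category range like "s0:c0.c1023" dominates any per-pod pair that Kubelet assigns, while another project's pair such as "s0:c11,c10" does not dominate "s0:c9,c12"; that is why the provisioner needs a broader label than a normal restricted pod gets.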
# oc describe daemonset
Name: local-volume-provisioner
Selector: app=local-volume-provisioner
Node-Selector: <none>
Labels: app=local-volume-provisioner
Annotations: openshift.io/generated-by=OpenShiftNewApp
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=local-volume-provisioner
Service Account: local-storage-admin
Containers:
provisioner:
Image: registry.reg-aws.openshift.com:443/openshift3/local-storage-provisioner:v3.9
Port: <none>
Environment:
MY_NODE_NAME: (v1:spec.nodeName)
MY_NAMESPACE: (v1:metadata.namespace)
VOLUME_CONFIG_NAME: local-volume-config
Mounts:
/etc/provisioner/config from provisioner-config (ro)
/mnt/local-storage from local-storage (rw)
Volumes:
local-storage:
Type: HostPath (bare host directory volume)
Path: /mnt/local-storage
HostPathType:
provisioner-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: local-volume-config
Optional: false
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 9s (x12 over 19s) daemonset-controller Error creating: pods "local-volume-provisioner-" is forbidden: unable to validate against any security context constraint: [spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000130000, 1000139999] spec.containers[0].securityContext.seLinuxOptions.level: Invalid value: "s0:c0.c1023": must be s0:c11,c10 spec.containers[0].securityContext.seLinuxOptions.level: Invalid value: "s0:c0.c1023": must be s0:c11,c10]
# oc get project local-storage -o yaml
apiVersion: project.openshift.io/v1
kind: Project
metadata:
annotations:
openshift.io/description: ""
openshift.io/display-name: ""
openshift.io/requester: piqin
openshift.io/sa.scc.mcs: s0:c11,c10
openshift.io/sa.scc.supplemental-groups: 1000130000/10000
openshift.io/sa.scc.uid-range: 1000130000/10000
creationTimestamp: 2018-02-22T02:40:13Z
name: local-storage
resourceVersion: "7911"
selfLink: /apis/project.openshift.io/v1/projects/local-storage
uid: b53dbdd1-1779-11e8-8ad6-fa163ecf4998
spec:
finalizers:
- openshift.io/origin
- kubernetes
status:
phase: Active
The guide at https://github.com/openshift/ose/tree/master/examples/storage-examples/local-examples got a new command that you missed:

oc adm policy add-scc-to-user privileged -z local-storage-admin

With that, local-storage-provisioner starts. Moving back to QA to test the original issue, i.e. deletion of PVCs in different namespaces.

Verified in openshift v3.9.0-0.47.0.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489
Description of problem:
A Pod in a different namespace from local-storage-provisioner used a PV created by local-storage-provisioner. After using the PV, the Pod and PVC were deleted, and local-storage-provisioner failed to delete the PV.

Version-Release number of selected component (if applicable):
openshift v3.9.0-0.39.0
kubernetes v1.9.1+a0ce1bc657

How reproducible:
Always

Steps to Reproduce:
1. Create a mountDir on one node, and create a mountpoint under the mountDir
2. Using image registry.reg-aws.openshift.com:443/openshift3/local-storage-provisioner:v3.9.0-0.39.0, create a local storage provisioner under namespace local-storage
3. A PV is created
4. Create a PVC bound to the PV in namespace piqin
5. Create a Pod using the PVC
6. Write data to the local volume
7. Delete the Pod and PVC
8. Check the PV status

Actual results:
The PV's status is "Released" and no new PV was created.

Expected results:
The PV is deleted and a new PV is created.

Additional info:

$ oc get project local-storage -o yaml
apiVersion: v1
kind: Project
metadata:
  annotations:
    openshift.io/description: ""
    openshift.io/display-name: ""
    openshift.io/requester: piqin
    openshift.io/sa.scc.mcs: s0:c12,c4
    openshift.io/sa.scc.supplemental-groups: 1000140000/10000
    openshift.io/sa.scc.uid-range: 1000140000/10000
  creationTimestamp: 2018-02-07T03:34:43Z
  name: local-storage
  resourceVersion: "13564"
  selfLink: /oapi/v1/projects/local-storage
  uid: d61bdd7a-0bb7-11e8-845e-fa163e06fc03
spec:
  finalizers:
  - openshift.io/origin
  - kubernetes
status:
  phase: Active

Before creating the Pod:
[root@host-172-16-120-50 ~]# ls -lZ /mnt/local-storage/slow/
drwxrwxrwt. root root unconfined_u:object_r:svirt_sandbox_file_t:s0 vol1

After creating the Pod:
# ls -lZ /mnt/local-storage/slow/
drwxrwsrwt. root 1000150000 system_u:object_r:svirt_sandbox_file_t:s0:c9,c12 vol1

$ oc describe pv local-pv-416096af --config=../../admin.kubeconfig
Name:            local-pv-416096af
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller=yes
                 pv.kubernetes.io/provisioned-by=local-volume-provisioner-172.16.120.50-05d8c344-0bae-11e8-bc43-fa163e06fc03
                 volume.alpha.kubernetes.io/node-affinity={"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"kubernetes.io/hostname","operator":"In","values":["172.16....
StorageClass:    local-slow
Status:          Released
Claim:           piqin/pvc-test
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1940836Ki
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt/local-storage/slow/vol1
Events:
  Type     Reason              Age               From                                                                          Message
  ----     ------              ----              ----                                                                          -------
  Warning  VolumeFailedDelete  9s (x5 over 49s)  local-volume-provisioner-172.16.120.50-05d8c344-0bae-11e8-bc43-fa163e06fc03   Error cleaning PV "local-pv-416096af": open /mnt/local-storage/slow/vol1: permission denied
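The "permission denied" surfaces at the provisioner's cleanup step: before recreating the PV it must empty the volume directory, but the directory was relabeled with the consuming pod's MCS categories (s0:c9,c12 in the ls -Z output above), which the provisioner's own label cannot access. A minimal Python sketch of what that cleanup step amounts to (not the actual sig-storage provisioner code; the function name is illustrative):

```python
import os
import shutil

def clean_volume_dir(path):
    """Empty a local volume directory but keep the directory itself,
    since it is the mount point the PV refers to.

    Under SELinux, if this process's label does not dominate the label
    Kubelet put on `path`, the first open/scandir here fails with
    EACCES -- the "permission denied" in the VolumeFailedDelete event.
    """
    for entry in list(os.scandir(path)):
        if entry.is_dir(follow_symlinks=False):
            shutil.rmtree(entry.path)
        else:
            os.unlink(entry.path)
```

Note that the error is raised on the directory itself, not on its contents, which is why relabeling the whole tree (chcon -Rt svirt_sandbox_file_t ...) or running the provisioner with a dominating label fixes it.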