Bug 1304255
Summary: | cannot mount pv for pod in dedicated | |
---|---|---|---
Product: | OpenShift Online | Reporter: | Dongbo Yan <dyan>
Component: | Storage | Assignee: | Matt Woodson <mwoodson>
Status: | CLOSED CURRENTRELEASE | QA Contact: | Jianwei Hou <jhou>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | 3.x | CC: | aos-bugs, dyan, jsafrane, sspeiche, whearn, xtian
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2016-05-23 15:10:58 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Dongbo Yan
2016-02-03 07:38:43 UTC
This turns out that the cluster PV wasn't properly configured. The bug still reproduces:

```
oc describe pod eap-app-postgresql-1-ok5z5
Name:           eap-app-postgresql-1-ok5z5
Namespace:      dyan
Image(s):       registry.access.redhat.com/rhscl/postgresql-94-rhel7:latest
Node:           ip-172-31-5-176.ec2.internal/172.31.5.176
Start Time:     Mon, 22 Feb 2016 10:18:41 +0800
Labels:         application=eap-app,deployment=eap-app-postgresql-1,deploymentConfig=eap-app-postgresql,deploymentconfig=eap-app-postgresql
Status:         Pending
Reason:
Message:
IP:
Controllers:    ReplicationController/eap-app-postgresql-1
Containers:
  eap-app-postgresql:
    Container ID:
    Image:          registry.access.redhat.com/rhscl/postgresql-94-rhel7:latest
    Image ID:
    QoS Tier:
      memory:       BestEffort
      cpu:          BestEffort
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment Variables:
      POSTGRESQL_USER:              users8J
      POSTGRESQL_PASSWORD:          0EIdIWgI
      POSTGRESQL_DATABASE:          root
      POSTGRESQL_MAX_CONNECTIONS:
      POSTGRESQL_SHARED_BUFFERS:
Conditions:
  Type      Status
  Ready     False
Volumes:
  eap-app-postgresql-pvol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  eap-app-postgresql-claim
    ReadOnly:   false
  default-token-3oyrd:
    Type:       Secret (a secret that should populate this volume)
    SecretName: default-token-3oyrd
Events:
  FirstSeen LastSeen Count From                                   SubobjectPath Type Reason      Message
  --------- -------- ----- ----                                   ------------- ---- ------      -------
  20m       20m      1     {scheduler }                                              Scheduled   Successfully assigned eap-app-postgresql-1-ok5z5 to ip-172-31-5-176.ec2.internal
  20m       9s       122   {kubelet ip-172-31-5-176.ec2.internal}                    FailedMount Unable to mount volumes for pod "eap-app-postgresql-1-ok5z5_dyan": unsupported volume type
  20m       8s       122   {kubelet ip-172-31-5-176.ec2.internal}                    FailedSync  Error syncing pod, skipping: unsupported volume type
```

------------------ add pvc info

```
oc describe pvc eap-app-postgresql-claim
Name:           eap-app-postgresql-claim
Namespace:      dyan
Status:         Bound
Volume:         pv-1-stage-master-7b5c2-vol-c1577d17
Labels:         application=eap-app,template=eap64-postgresql-persistent-s2i,xpaas=1.2.0
Capacity:       1Gi
Access Modes:   RWO
```

It looks like misconfiguration again. Please provide the definitions of the pod, claims, and persistent volumes, i.e.:

as the user:

```
# kubectl get pods -o yaml
# kubectl get pvc -o yaml
```

as admin:

```
# kubectl get pv -o yaml
```

Oh, and please also attach your node-config.yaml and master-config.yaml, thanks!

Thanks for the pv and pvc yaml in bug #1313560. It looks like a correct configuration. However, I would be very interested in how you got a volume that fails with "Error syncing pod, skipping: Cloud provider does not support volumes", as reported in comment #0. Can you still reproduce it? That must have been another volume, probably in a different OpenShift installation, right?

I can't reproduce it now. The PV can be mounted normally and the db pod status is always Running. When I check the project I reproduced it on before, it outputs:

```
persistentvolume "pv-1-stage-master-7b5c2-vol-c1577d17" not found
```

The PV pv-1-stage-master-7b5c2-vol-c1577d17 is no longer found in the output of "oc get pv". The EBS volume (vol-c1577d17) does exist inside of the AWS account. Something/someone removed the PV from the stage cluster; it is no longer found. I believe that the error originally reported in this bug is unrelated to the PV not being found.
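For context, a PV that this claim would bind to might look roughly like the sketch below. Only the PV name, the volume ID, and the 1Gi/RWO values come from the `oc describe pvc` output above; every other field is an assumption, not the cluster's actual definition:

```yaml
# Hypothetical PV definition -- only the name, volumeID, capacity, and
# access mode are taken from the pvc output above; the rest is illustrative.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1-stage-master-7b5c2-vol-c1577d17
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce               # matches the RWO access mode on the claim
  persistentVolumeReclaimPolicy: Retain
  awsElasticBlockStore:
    volumeID: vol-c1577d17        # the EBS volume mentioned later in this bug
    fsType: ext4                  # assumption; not stated anywhere in the bug
```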
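One plausible cause of a "Cloud provider does not support volumes" error in OpenShift 3.x (not confirmed as the cause here) is a node or master that was never configured with the AWS cloud provider, which is presumably why the node-config.yaml and master-config.yaml were requested. A sketch of the relevant stanzas, with values that are illustrative rather than taken from this cluster:

```yaml
# node-config.yaml (sketch) -- the kubelet must know it runs on AWS
kubeletArguments:
  cloud-provider:
    - "aws"

# master-config.yaml (sketch) -- API server and controllers likewise
kubernetesMasterConfig:
  apiServerArguments:
    cloud-provider:
      - "aws"
  controllerArguments:
    cloud-provider:
      - "aws"
```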
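Verifying the split state described above (EBS volume present in AWS, PV object gone from the cluster) could be done along these lines; the region flag is an assumption:

```sh
# Confirm the EBS volume still exists even though the PV object is gone.
# --region is a guess; use whichever region the cluster actually runs in.
aws ec2 describe-volumes --volume-ids vol-c1577d17 --region us-east-1

# Confirm the PV object itself is missing from the cluster (as cluster admin).
oc get pv pv-1-stage-master-7b5c2-vol-c1577d17
```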
QE verified this bug: the PV can be mounted normally and the db pod status is always Running now.