Bug 1304255 - Cannot mount PV for pod in OpenShift Dedicated
Summary: Cannot mount PV for pod in OpenShift Dedicated
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Storage
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Matt Woodson
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-02-03 07:38 UTC by Dongbo Yan
Modified: 2016-05-23 15:10 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-23 15:10:58 UTC
Target Upstream Version:
Embargoed:



Description Dongbo Yan 2016-02-03 07:38:43 UTC
Description of problem:
Cannot mount a persistent volume (PV) for a pod in OpenShift Dedicated.

Version-Release number of selected component (if applicable):
oc v3.1.1.6
kubernetes v1.1.0-origin-1107-g4c8e6f4

How reproducible:
Always

Steps to Reproduce:
1. Create a project in OpenShift Dedicated.
2. Create an app from the web console using the eap64-postgresql-persistent-s2i template (a CLI equivalent is sketched below).
3. After the deployment completes, check the DB pod status.
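
For reference, a rough CLI equivalent of these steps (a sketch only; the login URL and token are placeholders, and the template has to be present in the cluster's openshift namespace):

# log in to the dedicated cluster (URL/token are placeholders)
oc login https://<dedicated-master>:443 --token=<token>

# 1. create a project
oc new-project dyan

# 2. instantiate the persistent EAP 6.4 + PostgreSQL template
oc new-app --template=eap64-postgresql-persistent-s2i

# 3. watch the DB pod status
oc get pods -w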

Actual results:
The DB pod is stuck with status ContainerCreating.

Expected results:
The DB pod status is Running.

Additional info:
oc describe pod eap-app-postgresql-2-95pi1
Name:				eap-app-postgresql-2-95pi1
Namespace:			dyan
Image(s):			registry.access.redhat.com/rhscl/postgresql-94-rhel7:latest
Node:				ip-172-31-5-178.ec2.internal/172.31.5.178
Start Time:			Wed, 03 Feb 2016 14:47:21 +0800
Labels:				application=eap-app,deployment=eap-app-postgresql-2,deploymentConfig=eap-app-postgresql,deploymentconfig=eap-app-postgresql
Status:				Pending
Reason:				
Message:			
IP:				
Replication Controllers:	eap-app-postgresql-2 (1/1 replicas created)
Containers:
  eap-app-postgresql:
    Container ID:	
    Image:		registry.access.redhat.com/rhscl/postgresql-94-rhel7:latest
    Image ID:		
    QoS Tier:
      cpu:		BestEffort
      memory:		BestEffort
    State:		Waiting
      Reason:		ContainerCreating
    Ready:		False
    Restart Count:	0
    Environment Variables:
      POSTGRESQL_USER:			userQv1
      POSTGRESQL_PASSWORD:		4kNgC6Gc
      POSTGRESQL_DATABASE:		root
      POSTGRESQL_MAX_CONNECTIONS:	
      POSTGRESQL_SHARED_BUFFERS:	
Conditions:
  Type		Status
  Ready 	False 
Volumes:
  eap-app-postgresql-pvol:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	eap-app-postgresql-claim
    ReadOnly:	false
  default-token-yfflc:
    Type:	Secret (a secret that should populate this volume)
    SecretName:	default-token-yfflc
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath	Reason		Message
  ─────────	────────	─────	────					─────────────	──────		───────
  3m		3m		1	{scheduler }						Scheduled	Successfully assigned eap-app-postgresql-2-95pi1 to ip-172-31-5-178.ec2.internal
  3m		6s		24	{kubelet ip-172-31-5-178.ec2.internal}			FailedMount	Unable to mount volumes for pod "eap-app-postgresql-2-95pi1_dyan": Cloud provider does not support volumes
  3m		6s		24	{kubelet ip-172-31-5-178.ec2.internal}			FailedSync	Error syncing pod, skipping: Cloud provider does not support volumes

Comment 1 Steve Speicher 2016-02-16 19:18:01 UTC
It turns out that the cluster PV wasn't properly configured.
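
A quick cluster-admin sanity check of the PV pool would look roughly like this (a sketch; <pv-name> is a placeholder):

# list all PVs with their status (Available/Bound/Released) and bound claims
oc get pv

# show capacity, access modes and the volume source of a single PV
oc describe pv <pv-name>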

Comment 2 Dongbo Yan 2016-02-22 02:50:25 UTC
The bug still reproduces.
oc describe pod eap-app-postgresql-1-ok5z5
Name:           eap-app-postgresql-1-ok5z5
Namespace:      dyan
Image(s):       registry.access.redhat.com/rhscl/postgresql-94-rhel7:latest
Node:           ip-172-31-5-176.ec2.internal/172.31.5.176
Start Time:     Mon, 22 Feb 2016 10:18:41 +0800
Labels:         application=eap-app,deployment=eap-app-postgresql-1,deploymentConfig=eap-app-postgresql,deploymentconfig=eap-app-postgresql
Status:         Pending
Reason:         
Message:        
IP:             
Controllers:    ReplicationController/eap-app-postgresql-1
Containers:
  eap-app-postgresql:
    Container ID:       
    Image:              registry.access.redhat.com/rhscl/postgresql-94-rhel7:latest
    Image ID:           
    QoS Tier:
      memory:           BestEffort
      cpu:              BestEffort
    State:              Waiting
      Reason:           ContainerCreating
    Ready:              False
    Restart Count:      0
    Environment Variables:
      POSTGRESQL_USER:                  users8J
      POSTGRESQL_PASSWORD:              0EIdIWgI
      POSTGRESQL_DATABASE:              root
      POSTGRESQL_MAX_CONNECTIONS:       
      POSTGRESQL_SHARED_BUFFERS:        
Conditions:
  Type          Status
  Ready         False 
Volumes:
  eap-app-postgresql-pvol:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  eap-app-postgresql-claim
    ReadOnly:   false
  default-token-3oyrd:
    Type:       Secret (a secret that should populate this volume)
    SecretName: default-token-3oyrd
Events:
  FirstSeen     LastSeen        Count   From                                    SubobjectPath   Type            Reason          Message
  ---------     --------        -----   ----                                    -------------   --------        ------          -------
  20m           20m             1       {scheduler }                                                            Scheduled       Successfully assigned eap-app-postgresql-1-ok5z5 to ip-172-31-5-176.ec2.internal
  20m           9s              122     {kubelet ip-172-31-5-176.ec2.internal}                                  FailedMount     Unable to mount volumes for pod "eap-app-postgresql-1-ok5z5_dyan": unsupported volume type
  20m           8s              122     {kubelet ip-172-31-5-176.ec2.internal}                                  FailedSync      Error syncing pod, skipping: unsupported volume type

------------------ add PVC info
oc describe pvc eap-app-postgresql-claim
Name:           eap-app-postgresql-claim
Namespace:      dyan
Status:         Bound
Volume:         pv-1-stage-master-7b5c2-vol-c1577d17
Labels:         application=eap-app,template=eap64-postgresql-persistent-s2i,xpaas=1.2.0
Capacity:       1Gi
Access Modes:   RWO
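
Since the claim is Bound, "unsupported volume type" suggests the kubelet does not recognize, or is not configured for, the volume source carried by the bound PV. A quick check would be (a sketch; the second command requires cluster-admin rights):

# as the user: confirm which PV the claim is bound to
oc describe pvc eap-app-postgresql-claim

# as cluster-admin: dump that PV and inspect its volume source
# (e.g. awsElasticBlockStore, nfs, hostPath) and access modes
oc get pv pv-1-stage-master-7b5c2-vol-c1577d17 -o yaml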

Comment 3 Jan Safranek 2016-03-15 08:12:55 UTC
It looks like a misconfiguration again. Please provide the definitions of the pod, claims, and persistent volumes, i.e.:

as the user:
# kubectl get pods -o yaml 
# kubectl get pvc -o yaml 

as admin:
# kubectl get pv -o yaml

Comment 4 Jan Safranek 2016-03-15 08:14:21 UTC
Oh, and please also attach your node-config.yaml and master-config.yaml, thanks!
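
For context, on an AWS-backed cluster these files would normally carry the cloud-provider settings that the original "Cloud provider does not support volumes" error points at. A minimal sketch of the relevant stanzas (an assumption about this cluster, not a confirmed configuration; AWS credentials also have to be available to the master and node services):

node-config.yaml (on each node):
kubeletArguments:
  cloud-provider:
    - "aws"

master-config.yaml (on the master):
kubernetesMasterConfig:
  apiServerArguments:
    cloud-provider:
      - "aws"
  controllerArguments:
    cloud-provider:
      - "aws"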

Comment 6 Jan Safranek 2016-03-15 09:35:48 UTC
Thanks for the pv and pvc YAML in bug #1313560; it looks like a correct configuration.

However, I would be very interested in how you got a volume that fails with
"Error syncing pod, skipping: Cloud provider does not support volumes", as reported in comment #0.

Can you still reproduce it? That must have been another volume, probably in a different OpenShift installation, right?

Comment 7 Dongbo Yan 2016-03-15 10:11:09 UTC
I can't reproduce it now.
The PV can be mounted normally and the DB pod status stays Running.
When I check the project I reproduced it on before, the command outputs: persistentvolume "pv-1-stage-master-7b5c2-vol-c1577d17" not found

Comment 8 Matt Woodson 2016-03-29 14:12:22 UTC
The PV pv-1-stage-master-7b5c2-vol-c1577d17 is no longer found in the output of "oc get pv". The EBS volume (vol-c1577d17) does still exist in the AWS account.

Something or someone removed the PV from the stage cluster; it is no longer found. I believe the error originally reported in this bug is unrelated to the PV not being found.
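
If the PV ever needs to be restored on top of the still-existing EBS volume, a cluster admin could recreate it with something along these lines (a sketch only; capacity and access mode are taken from the PVC output in comment 2, while the reclaim policy and fsType are assumptions):

oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1-stage-master-7b5c2-vol-c1577d17
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  awsElasticBlockStore:
    volumeID: vol-c1577d17
    fsType: ext4
EOF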

Comment 9 Dongbo Yan 2016-03-30 01:55:41 UTC
QE verified this bug: the PV can be mounted normally and the DB pod status is now Running.

