Description of problem:
Created a jenkins persistent application from the console and it fails to start. Saw this in the events:

FailedMount {kubelet ip-172-31-14-20.ec2.internal} Unable to mount volumes for pod "jenkins-1-ovpb7_firsttest(72c4bf5b-1405-11e6-8ba7-0a1d348c34bb)": Could not attach EBS Disk "placeholder-for-provisioning": Invalid format for AWS volume (aws:///placeholder-for-provisioning)

Get events gives the following:

2m   2m   1   jenkins          DeploymentConfig                              FailedUpdate        {deployment-controller }                    Cannot update deployment firsttest/jenkins-1 status to Pending: replicationcontrollers "jenkins-1" cannot be updated: the object has been modified; please apply your changes to the latest version and try again
2m   2m   1   jenkins-1-deploy Pod                                           Scheduled           {default-scheduler }                        Successfully assigned jenkins-1-deploy to ip-172-31-14-20.ec2.internal
2m   2m   1   jenkins          DeploymentConfig                              DeploymentCreated   {deploymentconfig-controller }              Created new deployment "jenkins-1" for version 1
2m   2m   1   jenkins-1-deploy Pod             spec.containers{deployment}   Pulled              {kubelet ip-172-31-14-20.ec2.internal}      Container image "openshift3/ose-deployer:v3.2.0.40" already present on machine
2m   2m   1   jenkins-1-deploy Pod             spec.containers{deployment}   Created             {kubelet ip-172-31-14-20.ec2.internal}      Created container with docker id eddba8b89689
2m   2m   1   jenkins-1-deploy Pod             spec.containers{deployment}   Started             {kubelet ip-172-31-14-20.ec2.internal}      Started container with docker id eddba8b89689
2m   2m   1   jenkins-1        ReplicationController                         SuccessfulCreate    {replication-controller }                   Created pod: jenkins-1-ovpb7
2m   2m   1   jenkins-1-ovpb7  Pod                                           FailedScheduling    {default-scheduler }                        PersistentVolumeClaim is not bound: "jenkins"
2m   2m   1   jenkins-1-ovpb7  Pod                                           Scheduled           {default-scheduler }                        Successfully assigned jenkins-1-ovpb7 to ip-172-31-14-20.ec2.internal
1m   1m   1   jenkins-1-ovpb7  Pod                                           FailedMount         {kubelet ip-172-31-14-20.ec2.internal}      Unable to mount volumes for pod "jenkins-1-ovpb7_firsttest(72c4bf5b-1405-11e6-8ba7-0a1d348c34bb)": Could not attach EBS Disk "placeholder-for-provisioning": Invalid format for AWS volume (aws:///placeholder-for-provisioning)
1m   1m   1   jenkins-1-ovpb7  Pod                                           FailedSync          {kubelet ip-172-31-14-20.ec2.internal}      Error syncing pod, skipping: Could not attach EBS Disk "placeholder-for-provisioning": Invalid format for AWS volume (aws:///placeholder-for-provisioning)
1m   1m   1   jenkins-1-ovpb7  Pod             spec.containers{jenkins}      Created             {kubelet ip-172-31-14-20.ec2.internal}      Created container with docker id 59cfeb3c3778
1m   1m   1   jenkins-1-ovpb7  Pod             spec.containers{jenkins}      Started             {kubelet ip-172-31-14-20.ec2.internal}      Started container with docker id 59cfeb3c3778
44s  44s  1   jenkins-1-ovpb7  Pod             spec.containers{jenkins}      Started             {kubelet ip-172-31-14-20.ec2.internal}      Started container with docker id 7f457d7ff28e
44s  44s  1   jenkins-1-ovpb7  Pod             spec.containers{jenkins}      Created             {kubelet ip-172-31-14-20.ec2.internal}      Created container with docker id 7f457d7ff28e
52s  32s  2   jenkins-1-ovpb7  Pod             spec.containers{jenkins}      Unhealthy           {kubelet ip-172-31-14-20.ec2.internal}      Readiness probe failed: HTTP probe failed with statuscode: 503
29s  24s  2   jenkins-1-ovpb7  Pod             spec.containers{jenkins}      BackOff             {kubelet ip-172-31-14-20.ec2.internal}      Back-off restarting failed docker container
29s  24s  2   jenkins-1-ovpb7  Pod                                           FailedSync          {kubelet ip-172-31-14-20.ec2.internal}      Error syncing pod, skipping: failed to "StartContainer" for "jenkins" with CrashLoopBackOff: "Back-off 10s restarting failed container=jenkins pod=jenkins-1-ovpb7_firsttest(72c4bf5b-1405-11e6-8ba7-0a1d348c34bb)"
Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
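For reference, the FailedMount message suggests the PersistentVolume bound to the "jenkins" claim still carried the provisioner's placeholder string rather than a real EBS volume ID. A minimal sketch of the field the error points at, as it would appear in the output of oc get pv <name> -o yaml; only the volumeID string comes from the error above, the surrounding fields and values are illustrative:

    # excerpt of the bound PV's spec as implied by the error (illustrative fields)
    spec:
      capacity:
        storage: 1Gi
      accessModes:
      - ReadWriteOnce
      awsElasticBlockStore:
        volumeID: placeholder-for-provisioning   # kubelet rejects this: Invalid format for AWS volume
        fsType: ext4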
There were several issues affecting this template, but the main one you were probably hitting was related to the health checks. Re-assigning to devexp team.
Well, we bumped the liveness interval, which is the only known issue with the template, but I think the storage team ought to take a look at the AWS error messages; I'm not sure whether those are "normal".
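For illustration only, a bumped liveness interval on the jenkins container in the template's DeploymentConfig would look roughly like the sketch below; the probe path, port, and delay values are assumptions, not the template's actual settings:

    # hypothetical probe settings; actual template values may differ
    readinessProbe:
      httpGet:
        path: /login
        port: 8080
      initialDelaySeconds: 3
      timeoutSeconds: 3
    livenessProbe:
      httpGet:
        path: /login
        port: 8080
      initialDelaySeconds: 120   # longer initial delay so Jenkins can finish starting before the probe fires
      timeoutSeconds: 3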
This issue looks like a duplicate of bug #1333087: https://bugzilla.redhat.com/show_bug.cgi?id=1333087
Agreed. The "placeholder-for-provisioning" volumeID problem has been fixed in the linked BZ.
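For comparison, once dynamic provisioning completes the PV should carry a real EBS volume ID instead of the placeholder; one common form is sketched below, with a made-up availability zone and volume ID:

    # expected shape of the same field after provisioning (zone and ID are made up)
    awsElasticBlockStore:
      volumeID: aws://us-east-1d/vol-0a1b2c3d
      fsType: ext4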
Taking this issue from Brad. It's an Online issue, not Storage.
Mark agreed to close as duplicate. Thanks. *** This bug has been marked as a duplicate of bug 1333087 ***