Bug 1421386 - error adding MySQL module [NEEDINFO]
Summary: error adding MySQL module
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: OpenShift Online
Classification: Red Hat
Component: Image
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Ben Parees
QA Contact: Dongbo Yan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-02-11 22:27 UTC by dbb2000
Modified: 2017-02-17 19:13 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-17 19:13:49 UTC
Target Upstream Version:
bparees: needinfo? (dbb2000)


Attachments
console print screen showing the error (94.66 KB, image/png)
2017-02-11 22:27 UTC, dbb2000

Description dbb2000 2017-02-11 22:27:24 UTC
Created attachment 1249405 [details]
console print screen showing the error

I'm trying to add the MySQL module to my project, but the deployment fails.

The message log is the following:

--> Scaling mysqldb-1 to 1
--> Waiting up to 10m0s for pods in deployment mysqldb-1 to become ready
W0211 22:18:31.284147       1 reflector.go:330] github.com/openshift/origin/pkg/deploy/strategy/support/lifecycle.go:468: watch of *api.Pod ended with: too old resource version: 858271761 (858276371)
error: update acceptor rejected mysqldb-1: pods for deployment "mysqldb-1" took longer than 600 seconds to become ready


Any help will be appreciated.

Comment 1 Ben Parees 2017-02-13 13:55:15 UTC
Can you provide logs from the crashing container/pod?  (different from the deployment log you provided)

From the CLI you can run "oc logs <podname> -p" to get them, or you should be able to get them from the web console.

Comment 2 dbb2000 2017-02-13 15:06:20 UTC
I tried what you suggested, but I couldn't retrieve any logs.

I tried:

:~$ oc logs mysql-1-i9it1 -p
Error from server: previous terminated container "mysql" in pod "mysql-1-i9it1" not found

and then:
:~$ oc logs mysql-1-deploy -p
Error from server: previous terminated container "deployment" in pod "mysql-1-deploy" not found


But I found this on "Events" tab: 

Error syncing pod, skipping: error killing pod: failed to "TeardownNetwork" for "mysql-1-deploy_lingerie" with TeardownNetworkError: "Failed to teardown network for pod \"ef8616d6-f1fa-11e6-b599-0e63b9c1c48f\" using network plugins \"cni\": CNI request failed with status 400: 'Failed to execute iptables-restore: exit status 1 (iptables-restore: line 3 failed\n)\n'"

Comment 3 Ben Parees 2017-02-13 15:22:38 UTC
Can you share the deployment config yaml for your pod (oc get dc mysql -o yaml)? How did you create this mysql pod?

Comment 4 dbb2000 2017-02-13 15:27:51 UTC
I created it using the web console, by clicking "add to project", "Data Stores", "MySQL (Persistent)".


Here is the yaml file:

:~$ oc get dc mysql -o yaml
apiVersion: v1
kind: DeploymentConfig
metadata:
  creationTimestamp: 2017-02-13T14:44:34Z
  generation: 3
  labels:
    app: mysql-persistent
    template: mysql-persistent-template
  name: mysql
  namespace: lingerie
  resourceVersion: "865703920"
  selfLink: /oapi/v1/namespaces/lingerie/deploymentconfigs/mysql
  uid: ef7bcec9-f1fa-11e6-8361-0ebeb1070c7f
spec:
  replicas: 1
  selector:
    name: mysql
  strategy:
    recreateParams:
      timeoutSeconds: 600
    resources: {}
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: mysql
    spec:
      containers:
      - env:
        - name: MYSQL_USER
          value: davi
        - name: MYSQL_PASSWORD
          value: davi1980
        - name: MYSQL_DATABASE
          value: vendas
        image: registry.access.redhat.com/rhscl/mysql-56-rhel7@sha256:34afcd94138a80c8624a113883e8a88c06168a89880ff262ed4455e846c73b1d
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 3306
          timeoutSeconds: 1
        name: mysql
        ports:
        - containerPort: 3306
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -i
            - -c
            - MYSQL_PWD="$MYSQL_PASSWORD" mysql -h 127.0.0.1 -u $MYSQL_USER -D $MYSQL_DATABASE
              -e 'SELECT 1'
          failureThreshold: 3
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 512Mi
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /var/lib/mysql/data
          name: mysql-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql
  test: false
  triggers:
  - imageChangeParams:
      automatic: true
      containerNames:
      - mysql
      from:
        kind: ImageStreamTag
        name: mysql:5.6
        namespace: openshift
      lastTriggeredImage: registry.access.redhat.com/rhscl/mysql-56-rhel7@sha256:34afcd94138a80c8624a113883e8a88c06168a89880ff262ed4455e846c73b1d
    type: ImageChange
  - type: ConfigChange
status:
  conditions:
  - lastTransitionTime: 2017-02-13T14:44:34Z
    message: Deployment config does not have minimum availability.
    status: "False"
    type: Available
  - lastTransitionTime: 2017-02-13T15:18:14Z
    message: Replication controller "mysql-2" has failed progressing
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  details:
    causes:
    - imageTrigger:
        from:
          kind: ImageStreamTag
          name: mysql:5.6
          namespace: openshift
      type: ImageChange
    message: image change
  latestVersion: 2
  observedGeneration: 3

Comment 5 Ben Parees 2017-02-13 16:09:03 UTC
hm.  How about "oc describe pod mysql-1-i9it1"?

and "oc logs mysql-1-i9it1"  (no -p)

Comment 6 dbb2000 2017-02-13 16:41:03 UTC
Apparently, "mysql-1-i9it1" was a temporary pod and is no longer shown in my console.

I made a second deployment attempt; here is what you asked for:

:~$ oc describe pod mysql-2-deploy
Name:			mysql-2-deploy
Namespace:		lingerie
Security Policy:	restricted
Node:			ip-172-31-2-79.ec2.internal/172.31.2.79
Start Time:		Mon, 13 Feb 2017 13:07:55 -0200
Labels:			openshift.io/deployer-pod-for.name=mysql-2
Status:			Failed
IP:			10.1.93.176
Controllers:		<none>
Containers:
  deployment:
    Container ID:	docker://93c8ffe6fc290930edc9a6e753e62919481f0a88dbd02e04faf4d5767c9b0033
    Image:		registry.ops.openshift.com/openshift3/ose-deployer:v3.4.1.2
    Image ID:		docker-pullable://registry.ops.openshift.com/openshift3/ose-deployer@sha256:37adf782e29f09c815ae0bd91299e99ae84e2849b25de100c6581df36c6a7920
    Port:		
    Limits:
      cpu:	1
      memory:	512Mi
    Requests:
      cpu:		60m
      memory:		307Mi
    State:		Terminated
      Reason:		Error
      Exit Code:	1
      Started:		Mon, 13 Feb 2017 13:08:11 -0200
      Finished:		Mon, 13 Feb 2017 13:18:13 -0200
    Ready:		False
    Restart Count:	0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from deployer-token-bw61e (ro)
    Environment Variables:
      KUBERNETES_MASTER:	https://ip-172-31-10-24.ec2.internal
      OPENSHIFT_MASTER:		https://ip-172-31-10-24.ec2.internal
      BEARER_TOKEN_FILE:	/var/run/secrets/kubernetes.io/serviceaccount/token
      OPENSHIFT_CA_DATA:	-----BEGIN CERTIFICATE-----
MIIC5jCCAdCgAwIBAgIBATALBgkqhkiG9w0BAQswJjEkMCIGA1UEAwwbb3BlbnNo
aWZ0LXNpZ25lckAxNDYzMTU2NTg2MB4XDTE2MDUxMzE2MjMwNloXDTIxMDUxMjE2
MjMwN1owJjEkMCIGA1UEAwwbb3BlbnNoaWZ0LXNpZ25lckAxNDYzMTU2NTg2MIIB
IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArp4BlumhbaZiJxnPJPd78jqp
scHOa71PnC8Pd/Uzg/cr6kCz8cqFadVpHyAYxR2MVPzwGEjJ2ScP2f5iVby8w10n
408WfAv3HelPCcw5z1yp4pb2WnFNy1eglGl2fQp7Z/Od8TgO2OOpeVvLfxSL/K9V
OXYmt9HFnfhO/0c5Cv5T7OJc997h3++006yi/qt0lGTHgeF/eUCmnZ0tosjCRhAS
7AJrYAXN8ERI3s91mrzDMC4q3FjOLlWVa9ZrXeUrbvJYCYgbdtgG2wup2ETy2nFJ
6meeYRYF/7JaVXsOZWkJYfH2K6Lg1wGjFyOXNZkA2jLqOlRMUZWHNnA/DTpL3wID
AQABoyMwITAOBgNVHQ8BAf8EBAMCAKQwDwYDVR0TAQH/BAUwAwEB/zALBgkqhkiG
9w0BAQsDggEBADQPZ3eyz2OtWdsxzG//lq1DXguV7T5KUfgp76mkZuDjp5ermC42
m1DjFtEP8HvFTZgz+LYsAIhv7MShe/bZOieHnz4A/vc3oFi6uVrcLffR+CVjdlSP
UDKZzOkf7/jTxOzSQImNk3AQAuIeVCcMXF4v4zVRlyMaWcTtOuNGWdEmLZUhUrjT
E5Gh+KQOW1jFDYKeZ1RGkAMCL8aD6p7jNvmxVGzQasIleKylDteGblcEdn8M3Xjp
hHUVIWnru5CBTwCxCqSXkxMFUsZqSIy+hiMeJPFmkDIdSBb7n2BwgcG0cXu/Zuju
2PKZGzVqvgHhcIlwFZ2g9g1S/SwlVEGUvZs=
-----END CERTIFICATE-----

      OPENSHIFT_DEPLOYMENT_NAME:	mysql-2
      OPENSHIFT_DEPLOYMENT_NAMESPACE:	lingerie
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  deployer-token-bw61e:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	deployer-token-bw61e
QoS Class:	Burstable
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From					SubobjectPath			Type		Reason		Message
  ---------	--------	-----	----					-------------			--------	------		-------
  1h		1h		1	{default-scheduler }							Normal		Scheduled	Successfully assigned mysql-2-deploy to ip-172-31-2-79.ec2.internal
  1h		1h		1	{kubelet ip-172-31-2-79.ec2.internal}	spec.containers{deployment}	Normal		Pulling		pulling image "registry.ops.openshift.com/openshift3/ose-deployer:v3.4.1.2"
  1h		1h		1	{kubelet ip-172-31-2-79.ec2.internal}	spec.containers{deployment}	Normal		Pulled		Successfully pulled image "registry.ops.openshift.com/openshift3/ose-deployer:v3.4.1.2"
  1h		1h		1	{kubelet ip-172-31-2-79.ec2.internal}	spec.containers{deployment}	Normal		Created		Created container with docker id 93c8ffe6fc29; Security:[seccomp=unconfined]
  1h		1h		1	{kubelet ip-172-31-2-79.ec2.internal}	spec.containers{deployment}	Normal		Started		Started container with docker id 93c8ffe6fc29






:~$ oc logs mysql-2-deploy
--> Scaling mysql-2 to 1
--> Waiting up to 10m0s for pods in deployment mysql-2 to become ready
W0213 15:13:16.306491       1 reflector.go:330] github.com/openshift/origin/pkg/deploy/strategy/support/lifecycle.go:468: watch of *api.Pod ended with: too old resource version: 865673399 (865686864)
error: update acceptor rejected mysql-2: pods for deployment "mysql-2" took longer than 600 seconds to become ready

Comment 7 Ben Parees 2017-02-13 18:04:54 UTC
Hm, it seems like things are getting torn down after failing. While the deployment is still in progress and after the mysql pod is created, run "oc describe" and "oc logs -p" on it at that point. (Again, on the mysql pod, not the mysql deployment pod.)
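Run while the deployment is in flight, something like this would capture both before the pod gets cleaned up (collect_diag is just an illustrative wrapper, not part of oc; substitute your actual mysql pod name and namespace):

```shell
# Illustrative helper, not part of oc: grab "describe" output and
# previous-container logs for a pod while it still exists.
collect_diag() {
  local pod="$1" ns="$2"
  oc describe pod "$pod" -n "$ns"
  # -p shows logs from the previous (crashed) container instance, if any
  oc logs "$pod" -n "$ns" -p || true
}

# e.g. collect_diag mysql-2-abcde lingerie
```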

Comment 8 Michal Fojtik 2017-02-14 09:45:38 UTC
Ben, I have seen this:

--> Waiting up to 10m0s for pods in deployment mysql-2 to become ready
W0213 15:13:16.306491       1 reflector.go:330] github.com/openshift/origin/pkg/deploy/strategy/support/lifecycle.go:468: watch of *api.Pod ended with: too old resource version: 865673399 (865686864)
error: update acceptor rejected mysql-2: pods for deployment "mysql-2" took longer than 600 seconds to become ready

several times during the last week; it smells like something broke when reading from the logs endpoint. For deployment logs I added logic [1] that retries getting logs using the latest resourceVersion.

[1] https://github.com/openshift/origin/pull/12910
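The retry amounts to something like this (a sketch only; fetch_logs_with_retry is illustrative, the actual change is in [1]):

```shell
# Sketch of the retry idea: if fetching deployment logs fails (e.g. the
# watch dies with "too old resource version"), retry a few times before
# giving up instead of failing on the first error.
fetch_logs_with_retry() {
  local dc="$1" attempts="${2:-3}" i
  for i in $(seq 1 "$attempts"); do
    # succeed as soon as one attempt gets the logs
    oc logs "dc/$dc" && return 0
    sleep 2
  done
  return 1
}
```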

Comment 9 Ben Parees 2017-02-16 18:33:13 UTC
dbb2000, are you able to try/gather the data from my comment 7?

