Description of problem:
mysql-apb update from 5.6 to 5.7 failed

Version-Release number of selected component (if applicable):
asb: 1.2.12
mysql-apb: v3.10.0-0.32.0.0

How reproducible:
always

Steps to Reproduce:
1. provision mysql 5.6
2. create data in the mysql pod
3. upgrade to 5.7 via the backend or the web console

Actual results:
The upgrade to 5.7 failed. Sandbox log:

# oc logs -f apb-ea16373c-6662-4162-aac0-3448b2d38c34

PLAY [mysql-apb playbook to provision the application] *************************

TASK [ansible.kubernetes-modules : Install latest openshift client] ************
skipping: [localhost]

TASK [ansibleplaybookbundle.asb-modules : debug] *******************************
skipping: [localhost]

TASK [rhscl-mysql-apb-openshift : Find pod we need to update] ******************
changed: [localhost]

TASK [rhscl-mysql-apb-openshift : Find dc we will clean up] ********************
changed: [localhost]

TASK [rhscl-mysql-apb-openshift : Prepare for downgrade] ***********************
skipping: [localhost]

TASK [rhscl-mysql-apb-openshift : Create db backup directory] ******************
changed: [localhost]

TASK [rhscl-mysql-apb-openshift : Backup source database] **********************
changed: [localhost]

TASK [rhscl-mysql-apb-openshift : rsync db backup to apb] **********************
changed: [localhost]

TASK [rhscl-mysql-apb-openshift : Set mysql service state to present] **********
changed: [localhost]

TASK [rhscl-mysql-apb-openshift : include_tasks] *******************************
included: /opt/ansible/roles/rhscl-mysql-apb-openshift/tasks/dev.yml for localhost

TASK [rhscl-mysql-apb-openshift : set MySQL deployment with ephemeral storage to present] ***
changed: [localhost]

TASK [rhscl-mysql-apb-openshift : include_tasks] *******************************
skipping: [localhost]

TASK [rhscl-mysql-apb-openshift : Wait for mysql to come up] *******************
ok: [localhost]

TASK [rhscl-mysql-apb-openshift : Find pod we need to restore] *****************
changed: [localhost]

TASK [rhscl-mysql-apb-openshift : rsync db backup to new pod] ******************
changed: [localhost]

TASK [rhscl-mysql-apb-openshift : Restore database] ****************************
changed: [localhost]

TASK [rhscl-mysql-apb-openshift : Run mysql_upgrade] ***************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "oc exec -it -n 1oz3n mysql-5.7-dev-1-mk9jq -- /bin/bash -c \"mysql_upgrade -u root\"", "delta": "0:00:00.776787", "end": "2018-05-23 08:58:46.823383", "msg": "non-zero return code", "rc": 2, "start": "2018-05-23 08:58:46.046596", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\ncommand terminated with exit code 2", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "command terminated with exit code 2"], "stdout": "Checking if update is needed.\nThis installation of MySQL is already upgraded to 5.7.21, use --force if you still need to run mysql_upgrade", "stdout_lines": ["Checking if update is needed.", "This installation of MySQL is already upgraded to 5.7.21, use --force if you still need to run mysql_upgrade"]}

PLAY RECAP *********************************************************************
localhost                  : ok=12   changed=10   unreachable=0    failed=1

The old pod is not deleted, but the data has been moved to the new pod:

# oc get pod
NAME                     READY     STATUS    RESTARTS   AGE
mysql-5.6-prod-1-qvpt5   1/1       Running   0          4m
mysql-5.7-dev-1-mk9jq    1/1       Running   0          3m

The serviceinstance status is also wrong: it is not marked as failed.
# oc describe serviceinstance
Name:         dh-mysql-apb
Namespace:    1oz3n
Labels:       app=serviceinstance-template
Annotations:  openshift.io/generated-by=OpenShiftNewApp
API Version:  servicecatalog.k8s.io/v1beta1
Kind:         ServiceInstance
Metadata:
  Creation Timestamp:  2018-05-23T08:56:30Z
  Finalizers:
    kubernetes-incubator/service-catalog
  Generation:        2
  Resource Version:  60670
  Self Link:         /apis/servicecatalog.k8s.io/v1beta1/namespaces/1oz3n/serviceinstances/dh-mysql-apb
  UID:               2fb08d52-5e67-11e8-999e-0a580a800004
Spec:
  Cluster Service Class External Name:  dh-mysql-apb
  Cluster Service Class Ref:
    Name:  ddd528762894b277001df310a126d5ad
  Cluster Service Plan External Name:  dev
  Cluster Service Plan Ref:
    Name:       583f053f9ba165125a16cf9aff768017
  External ID:  2fb08cb8-5e67-11e8-999e-0a580a800004
  Parameters From:
    Secret Key Ref:
      Key:   parameters
      Name:  dh-mysql-apb-parameters-new
  Update Requests:  1
  User Info:
    Extra:
      Scopes . Authorization . Openshift . Io:
        user:full
    Groups:
      system:authenticated:oauth
      system:authenticated
    UID:
    Username:  zitang2
Status:
  Async Op In Progress:  false
  Conditions:
    Last Transition Time:  2018-05-23T08:59:14Z
    Message:               The instance was updated successfully
    Reason:                InstanceUpdatedSuccessfully
    Status:                True
    Type:                  Ready
  Deprovision Status:  Required
  External Properties:
    Cluster Service Plan External ID:    583f053f9ba165125a16cf9aff768017
    Cluster Service Plan External Name:  dev
    Parameter Checksum:                  9accfdc3031fff2ef5faa809546274db3e8ecc7d5f390d7fdc10089a78c4edbe
    Parameters:
      Mysql _ Database:  <redacted>
      Mysql _ Password:  <redacted>
      Mysql _ User:      <redacted>
      Mysql _ Version:   <redacted>
    User Info:
      Extra:
        Scopes . Authorization . Openshift . Io:
          user:full
      Groups:
        system:authenticated:oauth
        system:authenticated
      UID:
      Username:  zitang2
  Observed Generation:            2
  Orphan Mitigation In Progress:  false
  Provision Status:               Provisioned
  Reconciled Generation:          2
Events:
  Type     Reason                       Age               From                                Message
  ----     ------                       ----              ----                                -------
  Warning  ErrorWithParameters          6m (x10 over 6m)  service-catalog-controller-manager  failed to prepare parameters nil: secrets "dh-mysql-apb-parameters" not found
  Normal   Provisioning                 6m                service-catalog-controller-manager  The instance is being provisioned asynchronously
  Normal   Provisioning                 6m (x5 over 6m)   service-catalog-controller-manager  The instance is being provisioned asynchronously (action started)
  Normal   ProvisionedSuccessfully      5m                service-catalog-controller-manager  The instance was provisioned successfully
  Normal   UpdatingInstance             5m                service-catalog-controller-manager  The instance is being updated asynchronously
  Normal   UpdatingInstance             4m (x5 over 5m)   service-catalog-controller-manager  The instance is being updated asynchronously (action started)
  Warning  UpdateInstanceCallFailed     4m                service-catalog-controller-manager  Update call failed: Error occurred during update. Please contact administrator if the issue persists.
  Normal   InstanceUpdatedSuccessfully  4m                service-catalog-controller-manager  The instance was updated successfully

Expected results:
The update to 5.7 should succeed.

Additional info:
Updates of the other apbs succeed, and the mysql downgrade from 5.7 to 5.6 succeeds.
Looks like there is a disparity between the latest in brew and what has been pushed to the stage registry.

dzager.optiplex ➜ ~ docker inspect brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/mysql-apb:v3.10.0 --format "{{ index .Config.Labels.release }}"
0.50.0.0
dzager.optiplex ➜ ~ docker inspect registry.access.stage.redhat.com/openshift3/mediawiki-apb:v3.10.0 --format "{{ index .Config.Labels.release }}"
0.32.0.2

Moving this to MODIFIED. I'll manually build images and update the errata before moving back to ON_QA.
I used the image mysql-apb:v3.10.0-0.51.0.0 from brew for a pre-test, and found that the pod name has changed:

# oc get pod
NAME                                                 READY     STATUS    RESTARTS   AGE
mysql-f40c25fa-5f1e-11e8-9f13-0a580a800005-1-rk8w4   1/1       Running   0          3m

In the previous version the pod was named like mysql-<plan>-<version>-****, which was easy to identify.
We made this change so that multiple mysql-apb instances could be provisioned (and managed) in the same namespace; it was an intentional decision. Unfortunately, the only way to support that was to use the service instance ID provided by the catalog, which makes the pod name harder to read. We are considering alternatives, but they will not be pursued in this release (3.10).
https://errata.devel.redhat.com/advisory/33505 moved to QE

openshift-enterprise-asb-container-v3.10.0-0.51.0.1
openshift-enterprise-mediawiki-apb-v3.10.0-0.51.0.1
openshift-enterprise-postgresql-apb-v3.10.0-0.51.0.1
openshift-enterprise-mysql-apb-v3.10.0-0.51.0.1
openshift-enterprise-mariadb-apb-v3.10.0-0.51.0.1
openshift-enterprise-apb-tools-v3.10.0-0.32.0.2
Using mysql-apb-v3.10.0-0.51.0.1 to verify, the upgrade from 5.6 to 5.7 still fails:

TASK [mysql-apb : Restore database] ********************************************
changed: [localhost]

TASK [mysql-apb : Run mysql_upgrade] *******************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "oc exec -it -n mysql mysql-15ba54e7-6228-11e8-be35-0a580a800004-1-fvhr8 -- /bin/bash -c \"mysql_upgrade -u root\"", "delta": "0:00:00.740373", "end": "2018-05-28 03:39:41.074313", "msg": "non-zero return code", "rc": 2, "start": "2018-05-28 03:39:40.333940", "stderr": "Unable to use a TTY - input is not a terminal or the right kind of file\ncommand terminated with exit code 2", "stderr_lines": ["Unable to use a TTY - input is not a terminal or the right kind of file", "command terminated with exit code 2"], "stdout": "Checking if update is needed.\nThis installation of MySQL is already upgraded to 5.7.21, use --force if you still need to run mysql_upgrade", "stdout_lines": ["Checking if update is needed.", "This installation of MySQL is already upgraded to 5.7.21, use --force if you still need to run mysql_upgrade"]}

PLAY RECAP *********************************************************************
localhost                  : ok=12   changed=10   unreachable=0    failed=1

In v3.9.30, using the apb from the access registry, the upgrade also fails with the same error. When v3.9 was released the mysql apb dependency was 5.7.20 (bug https://bugzilla.redhat.com/show_bug.cgi?id=1544606#c14 logs the version), so I think this is caused by an upgrade of the mysql dependency:

Dependencies:
registry.access.redhat.com/rhscl/mysql-56-rhel7
registry.access.redhat.com/rhscl/mysql-57-rhel7

# docker images
REPOSITORY                                        TAG      IMAGE ID       CREATED       SIZE
registry.access.redhat.com/rhscl/mysql-57-rhel7   latest   e17704e3886f   2 weeks ago   429 MB
Is it possible that this is the same failure as https://bugzilla.redhat.com/show_bug.cgi?id=1570603? My hypothesis is that multiple update jobs are started, one completes successfully, and the rest fail because the database was already updated.
I don't run many updates at the same time, and the other apb updates succeed. Also, in v3.9 the mysql update from 5.6 to 5.7 still fails with the same error.
We can add --force to try to work around https://bugzilla.redhat.com/show_bug.cgi?id=1570603.

mysql_upgrade -u root --force && echo $?

is returning 0 for me, and rerunning mysql_upgrade should be harmless.
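An alternative to --force would be to inspect the output and tolerate this specific exit. A minimal sketch (my own, not the APB's actual task): run_upgrade is a stand-in for the real `oc exec ... mysql_upgrade -u root` call, hard-coded here to reproduce the 5.7.21 behavior seen in the logs:

```shell
#!/bin/sh
# Sketch: treat mysql_upgrade's 5.7.21 "already upgraded" exit (rc=2)
# as success instead of failing the play.

run_upgrade() {
    # Stand-in for: oc exec -n "$NAMESPACE" "$POD" -- mysql_upgrade -u root
    echo "Checking if update is needed."
    echo "This installation of MySQL is already upgraded to 5.7.21, use --force if you still need to run mysql_upgrade"
    return 2
}

out=$(run_upgrade)
rc=$?
if [ "$rc" -ne 0 ] && echo "$out" | grep -q "already upgraded"; then
    # 5.7.21 exits non-zero when nothing needs upgrading; not a real failure
    rc=0
fi
echo "rc=$rc"   # prints rc=0
```

In an Ansible task this would map to a failed_when condition on rc and stdout rather than a raw shell wrapper; the PR's plain --force is simpler.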
This is a change in behavior for mysql_upgrade in 5.7.21. I did an upgrade by manually modifying the APB to use 5.7-6 for a 5.7 upgrade (mysql 5.7.20) and it did not do this.

Also, 5.7.21 works just fine if we add --force.

https://github.com/ansibleplaybookbundle/mysql-apb/pull/34

Zihan, can you please open a BZ against 3.9 so we can get this fixed there as well? The fix will be pretty much the same.
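If always passing --force ever seems too blunt, one could gate it on the server version instead. A hedged sketch of my own (not what the PR does); it assumes the version string is obtained separately, e.g. by parsing `oc exec $POD -- mysqld --version`:

```shell
#!/bin/sh
# Sketch: append --force only for 5.7.21+, where mysql_upgrade exits
# non-zero if the data dictionary is already current.

needs_force() {
    # $1 is a MySQL version string like "5.7.21"
    v=$1
    major=${v%%.*}; rest=${v#*.}; minor=${rest%%.*}; patch=${rest#*.}
    [ "$major" -gt 5 ] && return 0
    [ "$major" -eq 5 ] && [ "$minor" -gt 7 ] && return 0
    [ "$major" -eq 5 ] && [ "$minor" -eq 7 ] && [ "$patch" -ge 21 ]
}

for v in 5.7.20 5.7.21 8.0.11; do
    if needs_force "$v"; then
        echo "$v: mysql_upgrade -u root --force"
    else
        echo "$v: mysql_upgrade -u root"
    fi
done
# prints:
#   5.7.20: mysql_upgrade -u root
#   5.7.21: mysql_upgrade -u root --force
#   8.0.11: mysql_upgrade -u root --force
```

Since rerunning mysql_upgrade with --force is harmless, the unconditional --force in the PR is the simpler and safer fix; this sketch only matters if the extra run time on large databases becomes a concern.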
(In reply to Jason Montleon from comment #9)
> This is a change in behavior for mysql_upgrade in 5.7.21. I did an upgrade
> by manually modifying the APB to use 5.7-6 for a 5.7 upgrade (mysql 5.7.20)
> and it did not do this.
>
> Also 5.7.21 works just fine if we add --force.
>
> https://github.com/ansibleplaybookbundle/mysql-apb/pull/34
>
> Zihan, can you please open a BZ against 3.9 so we can get this fixed there
> as well. The fix will pretty much be the same.

I opened this bug to track this against 3.9:
https://bugzilla.redhat.com/show_bug.cgi?id=1583895
verified version: mysql-apb-v3.10.0-0.54.0.1