Description of problem:
According to https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_hosted/templates/registry_config.j2#L81, the following lines were added to the inventory host file:

openshift_hosted_registry_storage_s3_cloudfront_baseurl=https://d2opkj68u8rx5e.cloudfront.net/
openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile=/etc/s3-cloudfront/cloudfront.pem
openshift_hosted_registry_storage_s3_cloudfront_keypairid=ABCDEFG

After installation, docker-registry is not deployed successfully because it cannot find the "/etc/s3-cloudfront/cloudfront.pem" private key file. Going through the openshift-ansible code, there is no place where this private key file is attached to the docker-registry container.

Version-Release number of selected component (if applicable):
openshift-ansible-3.4.25-1.git.0.eb2f314.el7.noarch

How reproducible:
Always
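For context, the section that registry_config.j2 renders into the registry's config.yml would look roughly like the following (values taken from the inventory lines above; exact layout is illustrative), which is why the private key must be readable at that path inside the docker-registry container:

<--snip-->
storage:
  - name: cloudfront
    options:
      baseurl: https://d2opkj68u8rx5e.cloudfront.net/
      privatekey: /etc/s3-cloudfront/cloudfront.pem
      keypairid: ABCDEFG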
PR at https://github.com/openshift/openshift-ansible/pull/3260. Waiting for some help to verify.
PR merged.
Re-tested this bug with openshift-ansible-roles-3.5.5-1.git.0.3ae2138.el7.noarch, and it FAILS.

Issue 1: The job failed at the following task:

TASK [openshift_hosted : Copy cloudfront.pem to the registry] ******************
Wednesday 08 February 2017 10:32:10 +0000 (0:00:00.221) 0:17:56.960 ****
fatal: [ec2-54-89-85-0.compute-1.amazonaws.com]: FAILED! => {"changed": false, "checksum": "39849800f808f9c8aa5ee26e706f60868bd409c1", "failed": true, "msg": "Destination directory /etc/s3-cloudfront does not exist"}

The "/etc/s3-cloudfront" directory should be created before copying.

Issue 2: Even if the above issue is fixed, the PR is still not enough: it only uploads the file to the remote host, but the cloudfront.pem file is never attached to the docker-registry container. After uploading the file to the remote host, more steps are needed, something like the following:

- set_fact:
    remote_file_path: "/etc/origin/{{ openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile | basename }}"
    target_file_dirname: "{{ openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile | dirname }}"

- name: Upload private key file to master
  copy: src=files/cloudfront.pem dest={{ remote_file_path }}

- name: Create registry secret for cloudfront private key file
  command: oc secrets new cloudfront {{ remote_file_path }}
  register: result
  failed_when: "'already exists' not in result.stderr and result.rc != 0"

- name: Add cloudfront secret to registry deployment config
  command: oc volume dc/docker-registry --add --name=cloudfront-vol -m {{ target_file_dirname }} --type=secret --secret-name=cloudfront
  register: result
  failed_when: "'already exists' not in result.stderr and result.rc != 0"
Thanks Johnny Liu! I'll try to get to this today.
Updated PR at https://github.com/openshift/openshift-ansible/pull/3369. Requesting testing from Ryan Cook as well before merging.
Johnny Liu: Updated the PR based on Jason DeTiberus's feedback and yours. Please take a look.
Added one comment on the PR: https://github.com/openshift/openshift-ansible/pull/3369/files#r101441171
https://github.com/openshift/openshift-ansible/pull/3369 was merged. Please test.
Retested this bug with the openshift-ansible master branch, and it FAILS. With the following line set:

openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile={{ lookup('env', 'WORKSPACE') }}/private-openshift-misc/v3-launch-templates/functionality-testing/aos-35/extra-ansible/files/cloudfront.pem

the installation failed as follows:

<--snip-->
TASK [openshift_hosted : fail] *************************************************
Tuesday 21 February 2017 04:01:24 +0000 (0:00:00.636) 0:21:22.467 ******
skipping: [ec2-54-211-50-213.compute-1.amazonaws.com] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [openshift_hosted : fail] *************************************************
Tuesday 21 February 2017 04:01:24 +0000 (0:00:00.247) 0:21:22.714 ******
skipping: [ec2-54-211-50-213.compute-1.amazonaws.com] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [openshift_hosted : assert] ***********************************************
Tuesday 21 February 2017 04:01:25 +0000 (0:00:00.247) 0:21:22.962 ******
ok: [ec2-54-211-50-213.compute-1.amazonaws.com] => {
    "changed": false,
    "msg": "All assertions passed"
}

TASK [openshift_hosted : Create registry secret for cloudfront] ****************
Tuesday 21 February 2017 04:01:25 +0000 (0:00:00.290) 0:21:23.252 ******
fatal: [ec2-54-211-50-213.compute-1.amazonaws.com]: FAILED! => {"changed": false, "failed": true, "msg": "argument contents is of type <type 'dict'> and we were unable to convert to list"}

Setting openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile="/tmp/cloudfront.pem" fails with the same error.
After talking with Tim, it looks like the error is with the "contents" argument itself and not with an argument's contents.
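For reference, a minimal sketch of the shape the module apparently expects, assuming (as the error message suggests) that "contents" wants a list of path/data entries rather than a single dict; the task and parameter layout here are illustrative, not the actual role code:

- name: Create registry secret for cloudfront
  oc_secret:
    state: present
    namespace: default
    name: docker-registry-s3-cloudfront
    contents:
    # list of entries, one per file in the secret (illustrative shape)
    - path: cloudfront.pem
      data: "{{ lookup('file', openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile) }}"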
We did some testing with the HEAD as of f0a32af0548eb309b9bb3bb2c366d35bdfab1847 and had success. The secret was created, volume attached, and registry started with the cloudfront configuration section. It's ready to be tested.
Retested this bug with the latest openshift-ansible git repo (master branch): PASS.
Latest changes merged and built, so back to ON_QA.
Re-tested this bug with openshift-ansible-3.5.15-1.git.0.8d2a456.el7.noarch, and it FAILS.

TASK [openshift_hosted : Add cloudfront secret to the registry volumes] ********
Monday 27 February 2017 06:19:47 +0000 (0:00:01.731) 0:20:32.596 *******
fatal: [ec2-184-73-52-163.compute-1.amazonaws.com]: FAILED! => {
    "failed": true
}

MSG:

the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 's3_volume_mount' is undefined

The error appears to have been in '/home/slave3/workspace/Launch Environment Flexy/private-openshift-ansible/roles/openshift_hosted/tasks/registry/storage/s3.yml': line 39, column 5, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be:

  - name: Add cloudfront secret to the registry volumes
    ^ here
Looks like a bug was introduced in 7cf5cc14. Will update.
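For reference, the usual Ansible remedy for this class of failure is to give the fact a default before it is consumed; a minimal sketch under assumptions (the variable shape and values are illustrative, and not necessarily what the eventual fix does):

# Illustrative only: default s3_volume_mount so later tasks can safely reference it.
- name: Add cloudfront secret to the registry volumes
  set_fact:
    s3_volume_mount: "{{ (s3_volume_mount | default([])) + [{'name': 'cloudfront-vol', 'path': '/etc/origin', 'type': 'secret', 'secret_name': 'docker-registry-s3-cloudfront'}] }}"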
Fix in https://github.com/openshift/openshift-ansible/pull/3504
This bug's verification is blocked by BZ#1427378.
Verified this bug with openshift-ansible-3.5.20-1.git.0.5a5fcd5.el7.noarch, and it PASSES.

1. Set the following lines in your inventory host file:

openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey={{ lookup('env','AWS_ACCESS_KEY_ID') }}
openshift_hosted_registry_storage_s3_secretkey={{ lookup('env','AWS_SECRET_ACCESS_KEY') }}
openshift_hosted_registry_storage_s3_bucket=openshift-qe-registry-testing-bucket1
openshift_hosted_registry_storage_s3_region=us-east-1
openshift_hosted_registry_storage_s3_cloudfront_baseurl=https://xxx.cloudfront.net/
openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile={{ lookup('env', 'WORKSPACE') }}/private-openshift-misc/v3-launch-templates/functionality-testing/aos-35/extra-ansible/files/cloudfront.pem
openshift_hosted_registry_storage_s3_cloudfront_keypairid=xxx

2. Trigger installation; the installation completes successfully.

3. Check:

# oc rsh docker-registry-1-vct81
sh-4.2$ cat /etc/registry/config.yml
<--snip-->
storage:
  - name: cloudfront
    options:
      baseurl: https://xxx.cloudfront.net/
      privatekey: /etc/origin/cloudfront.pem
      keypairid: xxx

# oc get po docker-registry-1-vct81 -o yaml
<--snip-->
    volumeMounts:
    - mountPath: /etc/origin
      name: cloudfront-vol
<--snip-->
  volumes:
<--snip-->
  - name: cloudfront-vol
    secret:
      defaultMode: 420
      secretName: docker-registry-s3-cloudfront

Following the workaround from comments #6 and #7 in BZ#1427378, sti build and image push work well.
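As an extra spot check (commands assume the default namespace and the object names shown above), the secret and the volume wiring can also be confirmed from outside the pod:

# oc get secret docker-registry-s3-cloudfront -n default
# oc volume dc/docker-registry -n default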
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0903