Bug 1395168
| Summary: | installer does not attach private key file to docker-registry when cloudfront is enabled. | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Johnny Liu <jialiu> |
| Component: | Installer | Assignee: | Steve Milner <smilner> |
| Status: | CLOSED ERRATA | QA Contact: | Johnny Liu <jialiu> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | 3.4.0 | CC: | aos-bugs, jialiu, jokerman, mmccomas |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | Bug Fix | |
| Doc Text: |
Cause:
When CloudFront was enabled, the installer did not use the private key file for the docker registry.
Consequence:
The docker registry was not deployed successfully.
Fix:
New steps were added to create a secret from the private key and attach it to the CloudFront-enabled docker registry.
Result:
The CloudFront docker registry works as expected.
|
Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2017-04-12 18:48:24 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1427378 | ||
| Bug Blocks: | |||
|
Description
Johnny Liu
2016-11-15 10:33:50 UTC
PR at https://github.com/openshift/openshift-ansible/pull/3260. Waiting for some help to verify.

PR merged.

Re-tested this bug with openshift-ansible-roles-3.5.5-1.git.0.3ae2138.el7.noarch, and FAIL.
Issue 1:
Job failed at the following task:
TASK [openshift_hosted : Copy cloudfront.pem to the registry] ******************
Wednesday 08 February 2017 10:32:10 +0000 (0:00:00.221) 0:17:56.960 ****
fatal: [ec2-54-89-85-0.compute-1.amazonaws.com]: FAILED! => {"changed": false, "checksum": "39849800f808f9c8aa5ee26e706f60868bd409c1", "failed": true, "msg": "Destination directory /etc/s3-cloudfront does not exist"}
"/etc/s3-cloudfront" dir should be created before copying.
Issue 2:
Even if the above issue is fixed, more steps are still needed. Going through the PR, it only uploads files to the remote host; the cloudfront.pem file is never attached to the docker-registry container. After uploading the file to the remote host, additional steps like the following are required:
- set_fact:
    remote_file_path: "/etc/origin/{{ openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile | basename }}"
    target_file_dirname: "{{ openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile | dirname }}"

- name: Upload private key file to master
  copy:
    src: files/cloudfront.pem
    dest: "{{ remote_file_path }}"

- name: Create registry secret for cloudfront private key file
  command: oc secrets new cloudfront {{ remote_file_path }}
  register: result
  failed_when: "'already exists' not in result.stderr and result.rc != 0"

- name: Add cloudfront secret to registry deployment config
  command: oc volume dc/docker-registry --add --name=cloudfront-vol -m {{ target_file_dirname }} --type=secret --secret-name=cloudfront
  register: result
  failed_when: "'already exists' not in result.stderr and result.rc != 0"
Thanks Johnny Liu! I'll try to get to this today.

Updated PR at https://github.com/openshift/openshift-ansible/pull/3369. Requesting testing from Ryan Cook as well before merging.

Johnny Liu: Updated PR based on Jason DeTiberus' and your feedback. Please take a look.

Added one comment about the PR: https://github.com/openshift/openshift-ansible/pull/3369/files#r101441171

https://github.com/openshift/openshift-ansible/pull/3369 was merged. Please test.

Retested this bug with the openshift-ansible master branch, and FAIL.
Setting the following line:
openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile={{ lookup('env', 'WORKSPACE') }}/private-openshift-misc/v3-launch-templates/functionality-testing/aos-35/extra-ansible/files/cloudfront.pem
Installation failed as following:
<--snip-->
TASK [openshift_hosted : fail] *************************************************
Tuesday 21 February 2017 04:01:24 +0000 (0:00:00.636) 0:21:22.467 ******
skipping: [ec2-54-211-50-213.compute-1.amazonaws.com] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}
TASK [openshift_hosted : fail] *************************************************
Tuesday 21 February 2017 04:01:24 +0000 (0:00:00.247) 0:21:22.714 ******
skipping: [ec2-54-211-50-213.compute-1.amazonaws.com] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}
TASK [openshift_hosted : assert] ***********************************************
Tuesday 21 February 2017 04:01:25 +0000 (0:00:00.247) 0:21:22.962 ******
ok: [ec2-54-211-50-213.compute-1.amazonaws.com] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [openshift_hosted : Create registry secret for cloudfront] ****************
Tuesday 21 February 2017 04:01:25 +0000 (0:00:00.290) 0:21:23.252 ******
fatal: [ec2-54-211-50-213.compute-1.amazonaws.com]: FAILED! => {"changed": false, "failed": true, "msg": "argument contents is of type <type 'dict'> and we were unable to convert to list"}
Setting openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile="/tmp/cloudfront.pem" instead failed with the same error.
After talking with Tim, it looks like the error is with the "contents" argument itself, not with an argument's contents. We did some testing with HEAD as of f0a32af0548eb309b9bb3bb2c366d35bdfab1847 and had success: the secret was created, the volume attached, and the registry started with the cloudfront configuration section. It's ready to be tested.

Retested this bug with the latest openshift-ansible git repo (master branch), PASS.

Latest changes merged and built, so back to ON_QA.

Re-tested this bug with openshift-ansible-3.5.15-1.git.0.8d2a456.el7.noarch, and FAIL.
TASK [openshift_hosted : Add cloudfront secret to the registry volumes] ********
Monday 27 February 2017 06:19:47 +0000 (0:00:01.731) 0:20:32.596 *******
fatal: [ec2-184-73-52-163.compute-1.amazonaws.com]: FAILED! => {
"failed": true
}
MSG:
the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 's3_volume_mount' is undefined
The error appears to have been in '/home/slave3/workspace/Launch Environment Flexy/private-openshift-ansible/roles/openshift_hosted/tasks/registry/storage/s3.yml': line 39, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Add cloudfront secret to the registry volumes
^ here
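The undefined variable suggests the step that registers `s3_volume_mount` was dropped or skipped by commit 7cf5cc14. A hedged sketch of what s3.yml presumably needs before the failing task (the task name and fact structure here are assumptions for illustration, not taken from the actual role):

    # Hypothetical sketch: define the volume-mount fact before it is
    # consumed by the "Add cloudfront secret to the registry volumes" task.
    - name: Compute the registry volume mounts for cloudfront
      set_fact:
        s3_volume_mount:
          - name: cloudfront-vol
            path: /etc/origin
            type: secret
            secret_name: docker-registry-s3-cloudfront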
Looks like a bug was introduced in 7cf5cc14. Will update.

PR merged.

This bug's verification is blocked by BZ#1427378.

Verified this bug with openshift-ansible-3.5.20-1.git.0.5a5fcd5.el7.noarch, and PASS.
1. Set the following lines in your inventory host file:
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey={{ lookup('env','AWS_ACCESS_KEY_ID') }}
openshift_hosted_registry_storage_s3_secretkey={{ lookup('env','AWS_SECRET_ACCESS_KEY') }}
openshift_hosted_registry_storage_s3_bucket=openshift-qe-registry-testing-bucket1
openshift_hosted_registry_storage_s3_region=us-east-1
openshift_hosted_registry_storage_s3_cloudfront_baseurl=https://xxx.cloudfront.net/
openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile={{ lookup('env', 'WORKSPACE') }}/private-openshift-misc/v3-launch-templates/functionality-testing/aos-35/extra-ansible/files/cloudfront.pem
openshift_hosted_registry_storage_s3_cloudfront_keypairid=xxx
2. Trigger installation, installation is completed successfully.
3. Check:
# oc rsh docker-registry-1-vct81
sh-4.2$ cat /etc/registry/config.yml
<--snip-->
storage:
  - name: cloudfront
    options:
      baseurl: https://xxx.cloudfront.net/
      privatekey: /etc/origin/cloudfront.pem
      keypairid: xxx
# oc get po docker-registry-1-vct81 -o yaml
<--snip-->
    volumeMounts:
    - mountPath: /etc/origin
      name: cloudfront-vol
<--snip-->
  volumes:
  <--snip-->
  - name: cloudfront-vol
    secret:
      defaultMode: 420
      secretName: docker-registry-s3-cloudfront
<--snip-->
Following the workaround of comments #6 and #7 mentioned in BZ#1427378, sti build and image push are working well.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0903