Bug 1395168 - installer does not attach private key file to docker-registry when cloudfront is enabled.
Summary: installer does not attach private key file to docker-registry when cloudfront...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Steve Milner
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On: 1427378
Blocks:
 
Reported: 2016-11-15 10:33 UTC by Johnny Liu
Modified: 2017-07-24 14:11 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: When CloudFront was enabled, the installer did not use the private key for the docker registry. Consequence: The docker registry was not deployed successfully. Fix: New steps were added to ensure the private key is stored in a secret and attached to the CloudFront-enabled docker registry. Result: The CloudFront docker registry works as expected.
Clone Of:
Environment:
Last Closed: 2017-04-12 18:48:24 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:0903 0 normal SHIPPED_LIVE OpenShift Container Platform atomic-openshift-utils bug fix and enhancement 2017-04-12 22:45:42 UTC

Description Johnny Liu 2016-11-15 10:33:50 UTC
Description of problem:
According to https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_hosted/templates/registry_config.j2#L81, the following lines were added to the inventory host file:
openshift_hosted_registry_storage_s3_cloudfront_baseurl=https://d2opkj68u8rx5e.cloudfront.net/
openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile=/etc/s3-cloudfront/cloudfront.pem
openshift_hosted_registry_storage_s3_cloudfront_keypairid=ABCDEFG


After installation, docker-registry is not deployed successfully because it cannot find the "/etc/s3-cloudfront/cloudfront.pem" private key file. Going through the openshift-ansible code, I did not find any place that attaches this private key file to the docker-registry container.
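For context, the missing attachment can be done by hand with `oc` (a hedged sketch; the secret and volume names here are illustrative, not something the installer creates):

```shell
# Create a secret from the private key file referenced in the inventory
# (secret name "cloudfront" is illustrative)
oc secrets new cloudfront /etc/s3-cloudfront/cloudfront.pem

# Mount that secret into the registry pod at the directory the registry
# config expects, then let the deployment config roll out again
oc volume dc/docker-registry --add --name=cloudfront-vol \
  -m /etc/s3-cloudfront --type=secret --secret-name=cloudfront
```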

Version-Release number of selected component (if applicable):
openshift-ansible-3.4.25-1.git.0.eb2f314.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Steve Milner 2017-02-06 21:48:28 UTC
PR at https://github.com/openshift/openshift-ansible/pull/3260. Waiting for some help to verify.

Comment 3 Steve Milner 2017-02-07 15:13:37 UTC
PR merged.

Comment 5 Johnny Liu 2017-02-08 10:56:33 UTC
Re-tested this bug with openshift-ansible-roles-3.5.5-1.git.0.3ae2138.el7.noarch, and it FAILED.


Issue 1:
Job failed at the following task:
TASK [openshift_hosted : Copy cloudfront.pem to the registry] ******************
Wednesday 08 February 2017  10:32:10 +0000 (0:00:00.221)       0:17:56.960 **** 

fatal: [ec2-54-89-85-0.compute-1.amazonaws.com]: FAILED! => {"changed": false, "checksum": "39849800f808f9c8aa5ee26e706f60868bd409c1", "failed": true, "msg": "Destination directory /etc/s3-cloudfront does not exist"}


"/etc/s3-cloudfront" dir should be created before copying.

Issue 2:
Even once the above issue is fixed, more steps are still needed: going through the PR, it only uploads the file to the remote host, but the cloudfront.pem file is not attached to the docker-registry container. After uploading the file to the remote host, something like the following is still required:
- set_fact:
    remote_file_path: "/etc/origin/{{ openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile | basename }}"
    target_file_dirname: "{{ openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile | dirname }}"

- name: Upload private key file to master
  copy:
    src: files/cloudfront.pem
    dest: "{{ remote_file_path }}"

- name: Create registry secret for cloudfront private key file
  command: oc secrets new cloudfront {{ remote_file_path }}
  register: result
  failed_when: "'already exists' not in result.stderr and result.rc != 0"

- name: Add cloudfront secret to registry deployment config
  command: >
    oc volume dc/docker-registry --add --name=cloudfront-vol
    -m {{ target_file_dirname }} --type=secret --secret-name=cloudfront
  register: result
  failed_when: "'already exists' not in result.stderr and result.rc != 0"

Comment 6 Steve Milner 2017-02-15 14:11:04 UTC
Thanks Johnny Liu! I'll try to get to this today.

Comment 7 Steve Milner 2017-02-15 15:41:56 UTC
Updated PR at https://github.com/openshift/openshift-ansible/pull/3369. Requesting testing from Ryan Cook as well before merging.

Comment 8 Steve Milner 2017-02-15 21:45:36 UTC
Johnny Liu: Updated PR based on Jason DeTiberus and your feedback. Please take a look.

Comment 9 Johnny Liu 2017-02-16 05:22:33 UTC
Add one comment about the PR:
https://github.com/openshift/openshift-ansible/pull/3369/files#r101441171

Comment 10 Steve Milner 2017-02-20 14:25:45 UTC
https://github.com/openshift/openshift-ansible/pull/3369 was merged. Please test.

Comment 11 Johnny Liu 2017-02-21 12:24:29 UTC
Retested this bug with the openshift-ansible master branch, and it FAILED.

Setting the following line:
openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile={{ lookup('env', 'WORKSPACE') }}/private-openshift-misc/v3-launch-templates/functionality-testing/aos-35/extra-ansible/files/cloudfront.pem


Installation failed as following:
<--snip-->
TASK [openshift_hosted : fail] *************************************************
Tuesday 21 February 2017  04:01:24 +0000 (0:00:00.636)       0:21:22.467 ****** 
skipping: [ec2-54-211-50-213.compute-1.amazonaws.com] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [openshift_hosted : fail] *************************************************
Tuesday 21 February 2017  04:01:24 +0000 (0:00:00.247)       0:21:22.714 ****** 
skipping: [ec2-54-211-50-213.compute-1.amazonaws.com] => {"changed": false, "skip_reason": "Conditional check failed", "skipped": true}

TASK [openshift_hosted : assert] ***********************************************
Tuesday 21 February 2017  04:01:25 +0000 (0:00:00.247)       0:21:22.962 ****** 
ok: [ec2-54-211-50-213.compute-1.amazonaws.com] => {
    "changed": false, 
    "msg": "All assertions passed"
}

TASK [openshift_hosted : Create registry secret for cloudfront] ****************
Tuesday 21 February 2017  04:01:25 +0000 (0:00:00.290)       0:21:23.252 ****** 
fatal: [ec2-54-211-50-213.compute-1.amazonaws.com]: FAILED! => {"changed": false, "failed": true, "msg": "argument contents is of type <type 'dict'> and we were unable to convert to list"}


If openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile="/tmp/cloudfront.pem" is set instead, it fails with the same error.

Comment 12 Steve Milner 2017-02-21 21:37:06 UTC
After talking with Tim, it looks like the error is with the "contents" argument itself and not an argument's contents.
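For reference, a hedged sketch of the shape the call likely needs, assuming the lib_openshift oc_secret module expects "contents" as a list of path/data entries rather than a bare dict (the secret name matches what the verified deployment shows later; everything else is illustrative):

```yaml
# Sketch only: parameter shapes are assumed, not taken from the merged PR
- name: Create registry secret for cloudfront
  oc_secret:
    state: present
    namespace: default
    name: docker-registry-s3-cloudfront
    contents:
    - path: cloudfront.pem
      data: "{{ lookup('file', openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile) }}"
```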

Comment 13 Steve Milner 2017-02-22 21:52:24 UTC
We did some testing with the HEAD as of f0a32af0548eb309b9bb3bb2c366d35bdfab1847 and had success. The secret was created, volume attached, and registry started with the cloudfront configuration section. It's ready to be tested.

Comment 14 Johnny Liu 2017-02-23 05:54:51 UTC
Retested this bug with the latest openshift-ansible git repo on the master branch, and it PASSED.

Comment 15 Scott Dodson 2017-02-24 21:21:47 UTC
Latest changes merged and built, so moving back to ON_QA.

Comment 16 Johnny Liu 2017-02-27 07:21:20 UTC
Re-tested this bug with openshift-ansible-3.5.15-1.git.0.8d2a456.el7.noarch, and it FAILED.



TASK [openshift_hosted : Add cloudfront secret to the registry volumes] ********
Monday 27 February 2017  06:19:47 +0000 (0:00:01.731)       0:20:32.596 ******* 
fatal: [ec2-184-73-52-163.compute-1.amazonaws.com]: FAILED! => {
    "failed": true
}

MSG:

the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 's3_volume_mount' is undefined

The error appears to have been in '/home/slave3/workspace/Launch Environment Flexy/private-openshift-ansible/roles/openshift_hosted/tasks/registry/storage/s3.yml': line 39, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:


  - name: Add cloudfront secret to the registry volumes
    ^ here
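A hedged sketch of the kind of fact definition that would avoid the undefined variable; the variable's structure here is illustrative, inferred from the volume mount the verified run shows:

```yaml
# Sketch only: s3_volume_mount must be defined before the task references it;
# field names below are illustrative, not taken from the actual role
- set_fact:
    s3_volume_mount:
    - name: cloudfront-vol
      path: /etc/origin
      type: secret
      secret_name: docker-registry-s3-cloudfront
```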

Comment 17 Steve Milner 2017-02-27 15:14:32 UTC
Looks like a bug was introduced in 7cf5cc14. Will update.

Comment 18 Steve Milner 2017-02-27 15:59:16 UTC
Fix in https://github.com/openshift/openshift-ansible/pull/3504

Comment 19 Steve Milner 2017-02-27 22:07:17 UTC
PR merged.

Comment 21 Johnny Liu 2017-02-28 03:44:32 UTC
This bug's verification is blocked by BZ#1427378.

Comment 22 Johnny Liu 2017-03-02 09:07:02 UTC
Verified this bug with openshift-ansible-3.5.20-1.git.0.5a5fcd5.el7.noarch, and it PASSED.


1. Set the following lines in your inventory host file:
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey={{ lookup('env','AWS_ACCESS_KEY_ID') }}
openshift_hosted_registry_storage_s3_secretkey={{ lookup('env','AWS_SECRET_ACCESS_KEY') }}
openshift_hosted_registry_storage_s3_bucket=openshift-qe-registry-testing-bucket1
openshift_hosted_registry_storage_s3_region=us-east-1
openshift_hosted_registry_storage_s3_cloudfront_baseurl=https://xxx.cloudfront.net/
openshift_hosted_registry_storage_s3_cloudfront_privatekeyfile={{ lookup('env', 'WORKSPACE') }}/private-openshift-misc/v3-launch-templates/functionality-testing/aos-35/extra-ansible/files/cloudfront.pem
openshift_hosted_registry_storage_s3_cloudfront_keypairid=xxx

2. Trigger the installation; it completes successfully.

3. Check:
# oc rsh docker-registry-1-vct81
sh-4.2$ cat /etc/registry/config.yml 
<--snip-->
  storage:
  - name: cloudfront
    options:
      baseurl: https://xxx.cloudfront.net/
      privatekey: /etc/origin/cloudfront.pem
      keypairid: xxx

# oc get po docker-registry-1-vct81 -o yaml
<--snip-->
    volumeMounts:
    - mountPath: /etc/origin
      name: cloudfront-vol
<--snip-->
  volumes:
  - name: cloudfront-vol
    secret:
      defaultMode: 420
      secretName: docker-registry-s3-cloudfront
<--snip-->

Following the workaround of comment #6 and #7 mentioned in BZ#1427378, sti build and image push is working well.
Comment 24 errata-xmlrpc 2017-04-12 18:48:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0903

