Description of problem:
Go through /usr/share/ansible/openshift-ansible/roles/openshift_hosted/templates/registry_config.j2:

{% if openshift.hosted.registry.storage.provider == 's3' %}
s3:
  accesskey: {{ openshift.hosted.registry.storage.s3.accesskey }}
  secretkey: {{ openshift.hosted.registry.storage.s3.secretkey }}
  region: {{ openshift.hosted.registry.storage.s3.region }}
  bucket: {{ openshift.hosted.registry.storage.s3.bucket }}
  encrypt: false
  secure: true
  v4auth: true
  rootdirectory: /registry
  chunksize: "{{ openshift.hosted.registry.storage.s3.chunksize | default(26214400) }}"

The rootdirectory is hardcoded to "/registry"; it should be customizable by the user.

Version-Release number of selected component (if applicable):
openshift-ansible-roles-3.3.11-1.git.4.eac15df.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
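For illustration, a minimal sketch of how that line could be made configurable, assuming a new openshift.hosted.registry.storage.s3.rootdirectory fact that falls back to the current default; the exact variable name and default handling used by the actual fix may differ:

  # hypothetical change to registry_config.j2, keeping /registry as the default
  rootdirectory: {{ openshift.hosted.registry.storage.s3.rootdirectory | default('/registry') }}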
Commit pushed to master at https://github.com/openshift/openshift-ansible

https://github.com/openshift/openshift-ansible/commit/6bf9b60c0e5be8ee9aa2f9bc76cdf5bbfac632b1
Merge pull request #2428 from abutcher/s3-rootdirectory

Bug 1367284 - rootdirectory configuration is hardcode when installer is using s3 as registry storage
No puddle build with the fix is available yet; waiting for a new puddle that includes it.
Re-tested this bug with openshift-ansible-3.3.25-1.git.0.56ee824.el7, and it FAILED. It seems the above PR has not been merged into the RPM.

# vi roles/openshift_hosted/templates/registry_config.j2
<--snip-->
{% if openshift.hosted.registry.storage.provider == 's3' %}
s3:
  accesskey: {{ openshift.hosted.registry.storage.s3.accesskey }}
  secretkey: {{ openshift.hosted.registry.storage.s3.secretkey }}
  region: {{ openshift.hosted.registry.storage.s3.region }}
  bucket: {{ openshift.hosted.registry.storage.s3.bucket }}
  encrypt: false
  secure: true
  v4auth: true
  rootdirectory: /registry
  chunksize: "{{ openshift.hosted.registry.storage.s3.chunksize | default(26214400) }}"
<--snip-->

# rpm -qf roles/openshift_hosted/templates/registry_config.j2
openshift-ansible-roles-3.3.25-1.git.0.56ee824.el7.noarch
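As a side note, one quick way to check whether an installed openshift-ansible-roles build carries the fix, assuming the same installed path as in the description (the grep below is only an illustration, not part of the original report):

# grep rootdirectory /usr/share/ansible/openshift-ansible/roles/openshift_hosted/templates/registry_config.j2

If the output shows a literal "rootdirectory: /registry" with no Jinja2 expression, the RPM does not contain the change.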
Sorry, I missed that commit. The updated build is in the puddle now.
Verified this bug with openshift-ansible-3.3.26-1.git.0.f4e82a4.el7, and it PASSED.

Set the following line in the inventory host file:
openshift_hosted_registry_storage_s3_rootdirectory=/installtest

# oc rsh docker-registry-2-tiuq9
sh-4.2$ cat /etc/registry/config.yml
version: 0.1
log:
  level: debug
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  s3:
    accesskey: xxxx
    secretkey: xxxx
    region: us-east-1
    bucket: yyyy
    encrypt: false
    secure: true
    v4auth: true
    rootdirectory: /installtest
    chunksize: "26214400"
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        pullthrough: True
  storage:
    - name: openshift

Triggered an sti build; the image was pushed to the correct directory in the s3 bucket successfully.
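For anyone repeating the verification, a sketch of the relevant inventory variables; only the rootdirectory line is taken from this comment, the other names follow the installer's openshift_hosted_registry_storage_* naming and should be checked against the docs for the release in use:

[OSEv3:vars]
# registry backed by an existing S3 bucket (values masked as in the comment above)
openshift_hosted_registry_storage_kind=object
openshift_hosted_registry_storage_provider=s3
openshift_hosted_registry_storage_s3_accesskey=xxxx
openshift_hosted_registry_storage_s3_secretkey=xxxx
openshift_hosted_registry_storage_s3_bucket=yyyy
openshift_hosted_registry_storage_s3_region=us-east-1
# added by the fix: directory inside the bucket used by the registry
openshift_hosted_registry_storage_s3_rootdirectory=/installtest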
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1983