Created attachment 1255995 [details]
Wrong port used when logging in to Kibana

Description of problem:
After the logging stack was successfully deployed via ansible, the redirect URL of the Kibana route used the wrong port number when navigating from the master server. In this case the master server port is 443 in master-config.yaml, but the server wrongly used port 8443; the attached ansible log also shows it using port 8443. If we change the port from 8443 to 443, the Kibana UI can be accessed.

# oc get route
NAME             HOST/PORT                        PATH   SERVICES         PORT    TERMINATION   WILDCARD
logging-kibana   kibana.0221-ftm.qe.rhcloud.com          logging-kibana   <all>   reencrypt     None

Version-Release number of selected component (if applicable):
# openshift version
openshift v3.5.0.32-1+4f84c83
kubernetes v1.5.2+43a9be4
etcd 3.1.0

Image IDs:
openshift3/logging-elasticsearch   d715f4d34ad4
openshift3/logging-kibana          e0ab09c2cbeb
openshift3/logging-fluentd         47057624ecab
openshift3/logging-auth-proxy      139f7943475e
openshift3/logging-curator         7f034fdf7702

How reproducible:
Always

Steps to Reproduce:
1. Prepare the inventory file:

[oo_first_master]
$master ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="~/libra.pem" openshift_public_hostname=$master

[oo_first_master:vars]
deployment_type=openshift-enterprise
openshift_release=v3.5.0
openshift_logging_install_logging=true
openshift_logging_kibana_hostname=kibana.$subdomain
public_master_url=https://$master:443
openshift_logging_fluentd_hosts=$node
openshift_logging_image_prefix=$registry/openshift3/
openshift_logging_image_version=3.5.0
openshift_logging_namespace=logging
openshift_logging_fluentd_use_journal=true
openshift_logging_use_ops=false

2. Use the playbooks from https://github.com/openshift/openshift-ansible/ to deploy the logging stack.

Actual results:
The Kibana route takes the wrong port number, and the browser returns an error:
Unable to connect
Firefox can't establish a connection to the server at ec2-54-165-25-191.compute-1.amazonaws.com:8443.
Expected results:
The Kibana route should be accessible without error.

Additional info:
The ansible execution log and the server's master-config.yaml are attached.
Created attachment 1255997 [details] master-config.yaml
Created attachment 1255998 [details]
full ansible run log
From https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_logging I find the following message:

openshift_logging_master_public_url: The public facing URL for the Kubernetes master, this is used for Authentication redirection. Defaults to 'https://{{openshift.common.public_hostname}}:8443'.

Maybe this defect is related to that: ansible always uses the default port 8443, rather than taking the port from a variable.
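Given that default, a possible workaround (an untested sketch, using the same `$master` placeholder as the inventory above) would be to set the variable explicitly so the role does not fall back to port 8443:

```ini
# Hypothetical inventory override: point the logging role at the real
# public master URL (port 443 here) instead of the 8443 default.
[oo_first_master:vars]
openshift_logging_master_public_url=https://$master:443
```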
fixed in https://github.com/openshift/openshift-ansible/pull/3438
Verified with the latest openshift-ansible playbooks; ansible still uses port 8443 instead of 443. Although 'PLAY RECAP' showed the whole run failed, all pods were generated successfully, and the Kibana UI can be accessed if we change the port from 8443 to 443.

Attached the full ansible run log. It failed at 'Verify API Server' and retried 120 times; I think the retry count is too high. From the message, it also used port 8443:

FAILED - RETRYING: HANDLER: openshift_logging : Verify API Server (120 retries left). Result was:
{
    "attempts": 1,
    "changed": false,
    "cmd": [
        "curl",
        "--silent",
        "--tlsv1.2",
        "--cacert",
        "/etc/origin/master/ca-bundle.crt",
        "https://ip-172-18-1-11.ec2.internal:8443/healthz/ready"
    ],
    "delta": "0:00:00.010390",
    "end": "2017-02-22 00:43:54.082867",
    "failed": true,
    "invocation": {
        "module_args": {
            "_raw_params": "curl --silent --tlsv1.2 --cacert /etc/origin/master/ca-bundle.crt https://ip-172-18-1-11.ec2.internal:8443/healthz/ready",
            "_uses_shell": false,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "warn": false
        },
        "module_name": "command"
    },
    "rc": 7,
    "retries": 121,
    "start": "2017-02-22 00:43:54.072477",
    "warnings": []
}
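The failing health check above can also be reproduced by hand. A minimal sketch, assuming the host name from the log above and that the correct port is the 443 configured in master-config.yaml:

```shell
#!/bin/sh
# Build the master health-check URL from a configurable port instead of
# the hard-coded 8443 that the playbook used.
MASTER_HOST="ip-172-18-1-11.ec2.internal"   # from the attached ansible log
MASTER_PORT="${MASTER_PORT:-443}"           # should match master-config.yaml
HEALTHZ_URL="https://${MASTER_HOST}:${MASTER_PORT}/healthz/ready"
echo "$HEALTHZ_URL"
# To actually probe the endpoint (requires network access to the master):
#   curl --silent --tlsv1.2 --cacert /etc/origin/master/ca-bundle.crt "$HEALTHZ_URL"
```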
Created attachment 1256345 [details]
ansible log showing it still uses port 8443
Jason,

Looking through the log there is some issue in the fact gathering where the api_url switches from:

    "api_port": "443",
    "api_url": "https://ip-172-18-1-11.ec2.internal",

to:

    "api_port": "8443",
    "api_url": "https://ip-172-18-1-11.ec2.internal:8443",

where the former is reported as the correct one. Thoughts?
Upgrading from Logging 3.4.1 to 3.5.0 failed for the same reason as this defect: it wrongly used port 8443 instead of 443, and the Kibana UI shows an "Application is not available" error. See the screenshot.
Created attachment 1257159 [details] "Application is not available" error in Kibana after upgrading logging from 3.4.1 to 3.5.0
I believe this was fixed in https://github.com/openshift/openshift-ansible/pull/3438/
Jeff, this bz was moved back to ASSIGNED but the openshift-ansible PR was merged - does that mean the openshift-ansible PR did not fix the problem?
Rich, I'm trying to figure out why https://bugzilla.redhat.com/show_bug.cgi?id=1425312#c7 is happening still.
@ewolinet, Verified with your fix; the test passed, and Kibana now takes the right port number.
Closing this defect according to Comment 15.
Tested on AWS; this issue was reproduced.

Version-Release number of selected component:
openshift-ansible-3.5.28-1.git.0.103513e.el7.noarch

Note: openshift-ansible and the playbooks were yum installed.

According to your fix https://github.com/openshift/openshift-ansible/pull/3550, the file 'roles/openshift_logging/meta/main.yaml' should be:

dependencies:
- role: lib_openshift
- role: openshift_master_facts

but it is still wrong in openshift-ansible-playbooks-3.5.28-1.git.0.103513e.el7.noarch:

dependencies:
- role: lib_openshift
- role: openshift_master_facts
- role: openshift_facts
(In reply to Junqi Zhao from comment #18)
> file 'roles/openshift_logging/meta/main.yaml'
> should be
>
> dependencies:
> - role: lib_openshift
> - role: openshift_facts
>
> but it still wrong in
> openshift-ansible-playbooks-3.5.28-1.git.0.103513e.el7.noarch
> dependencies:
> - role: lib_openshift
> - role: openshift_master_facts
> - role: openshift_facts
https://github.com/openshift/openshift-ansible/pull/3644
additional changes merged into openshift-ansible-3.5.31, ON_QA
Blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1431935; will verify it after BZ #1431935 is fixed.
The referenced issue only appears to block because it has an invalid value. Please update 'openshift_logging_es_pvc_pool' to contain a number followed by a letter (e.g. 1G, 1m).
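As a sketch of the suggested workaround (hypothetical value; the variable name comes from the comment above), the inventory line would look like:

```ini
# Hypothetical: give openshift_logging_es_pvc_pool a valid value --
# a number followed by a unit letter, as suggested above.
[oo_first_master:vars]
openshift_logging_es_pvc_pool=1G
```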
Verified according to the fix of https://bugzilla.redhat.com/show_bug.cgi?id=1431935; this issue is fixed.

I did not set openshift_logging_es_pvc_pool in the inventory file, and there is no such parameter in https://github.com/openshift/openshift-ansible/tree/release-1.5/roles/openshift_logging.

Attached the inventory file.
Created attachment 1263166 [details]
ansible inventory file, with openshift_logging_es_pvc_pool not set
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0903
Problem was that OAP_PUBLIC_MASTER_URL for the Kibana deployment config was wrong.