Bug 1413718 - Logging upgrade fails
Summary: Logging upgrade fails
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Jeff Cantrill
QA Contact: Xia Zhao
Depends On:
Reported: 2017-01-16 19:12 UTC by Wesley Hearn
Modified: 2020-12-14 08:01 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2017-02-02 21:40:55 UTC
Target Upstream Version:

Attachments (Terms of Use)
successful upgrade log on clean installed OCP (162.82 KB, text/plain)
2017-01-17 05:19 UTC, Xia Zhao
successful upgrade log on upgraded OCP 3.4.1 (118.41 KB, text/plain)
2017-01-20 07:14 UTC, Xia Zhao

Description Wesley Hearn 2017-01-16 19:12:29 UTC
Description of problem:
Passing mode=upgrade to the template fails. The deployer does not appear to clean up the existing route, so the upgrade bails out after all the other logging components have already been deleted.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Install logging 3.3.0 on a 3.3.X OCP cluster
2. Upgrade OCP cluster to 3.4.X
3. Rerun the logging template passing in mode=upgrade.

Actual results:
Error from server: routes "logging-kibana" already exists
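
The route conflict above can be inspected and, if appropriate, cleared by hand before re-running the deployer. A minimal sketch follows; the route name comes from the error message, while the project name "logging" is the conventional default and is an assumption here:

```shell
# Check whether the old Kibana route survived the deployer's cleanup pass
# (assumes the logging stack lives in the "logging" project)
oc get route logging-kibana -n logging

# If it is still present, deleting it lets a re-run of MODE=upgrade recreate it
oc delete route logging-kibana -n logging

# Re-run the deployer in upgrade mode (command as used elsewhere in this bug)
oc new-app logging-deployer-template -p MODE=upgrade
```

Deleting the route manually is a workaround sketch only; cleaning it up is what the deployer itself is expected to do.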

Expected results:

Additional info:

Comment 2 Xia Zhao 2017-01-17 05:18:38 UTC
The issue didn't reproduce on a cleanly installed OCP env; attached the successful deployer log from upgrading the logging stacks from 3.3.1 to 3.4.0.

Comment 3 Xia Zhao 2017-01-17 05:19:20 UTC
Created attachment 1241541 [details]
successful upgrade log on clean installed OCP

Comment 4 Jeff Cantrill 2017-01-17 18:06:04 UTC
@xia  Please clarify what you mean by "clean install of OCP env".  Are you saying you followed @wesley's setup and had no issues?

Comment 5 Jeff Cantrill 2017-01-17 19:42:49 UTC
Using origin equivalent:

* Installed 1.3.2 logging using https://github.com/openshift/origin-aggregated-logging/tree/v1.3.2/deployer
* Upgraded to 1.4.2 using:  oc new-app logging-deployer-template -p MODE=upgrade -p IMAGE_VERSION=v1.4.0-rc
* Successfully upgraded the stack.

Comment 6 Xia Zhao 2017-01-18 09:52:42 UTC
(In reply to Jeff Cantrill from comment #4)
> @xia  Please clarify what you mean by "clean install of OCP env".  Are you
> saying you followed @wesley's setup and had no issues?
Hi Jeff,

I meant I did the following test:
1. Installed logging 3.3.1 on an OCP 3.4.0 cluster
2. Upgraded the logging 3.3.1 stacks to 3.4.0 stacks there with deployer mode=upgrade

Please note that @wesley is actually upgrading the OCP cluster itself, so our steps are not exactly the same.

Comment 8 Xia Zhao 2017-01-19 07:48:37 UTC
@whearn According to https://bugzilla.redhat.com/show_bug.cgi?id=1413714#c2, can you confirm whether you are also upgrading the logging stacks via Ansible here? If so, this bz should be assigned to component=installer as well.

Comment 9 Jeff Cantrill 2017-01-19 14:44:54 UTC
@xia  Given this is 3.4, this should still use the deployer for deployment.

Comment 10 Wesley Hearn 2017-01-19 14:51:33 UTC
@Xia these roles are Ops-managed and not supported; they are how we automated the install. At the core we are following the instructions on docs.openshift.com.


Comment 11 Xia Zhao 2017-01-20 07:13:14 UTC
Upgraded OCP from 3.3.1 to 3.4.1; the issue cannot be reproduced. Attaching the successful upgrade log.

Comment 12 Xia Zhao 2017-01-20 07:14:00 UTC
Created attachment 1242661 [details]
successful upgrade log on upgraded OCP 3.4.1

Comment 13 Jeff Cantrill 2017-02-01 15:16:11 UTC
I am unable to reproduce. Followed:

1. install ose 3.3
2. install logging 3.3 https://docs.openshift.com/container-platform/3.3/install_config/aggregate_logging.html	
3. upgrade to atomic-openshift-
4. upgrade by grabbing 3.4 deployer template
5. oc new-app logging-deployer-template --param MODE=upgrade --param IMAGE_VERSION=3.4.1
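
Step 4 above ("grabbing 3.4 deployer template") can be sketched as follows. The raw-file path into the origin-aggregated-logging repository is an assumption based on that repo's layout, since comment 13 does not spell it out:

```shell
# Remove the old deployer template and load the 3.4-era one
# (the deployer.yaml URL below is assumed, not taken from this bug)
oc delete template logging-deployer-template -n logging
oc create -n logging -f https://raw.githubusercontent.com/openshift/origin-aggregated-logging/release-1.4/deployer/deployer.yaml

# Then run the upgrade exactly as in step 5
oc new-app logging-deployer-template --param MODE=upgrade --param IMAGE_VERSION=3.4.1
```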

@Wesley is there an environment where we can reproduce?  I am otherwise inclined to close as unable to reproduce.

Comment 14 Jeff Cantrill 2017-02-02 21:40:55 UTC
Per my IRC discussion with Wesley, they have not reproduced the issue since they are using a different avenue to re-install. It's OK to close.
