Bug 1371023

Summary: Backporting: Haproxy has non-working Horizon session persistence
Product: Red Hat OpenStack
Reporter: Chaitanya Shastri <cshastri>
Component: openstack-puppet-modules
Assignee: Michele Baldessari <michele>
Status: CLOSED CURRENTRELEASE
QA Contact: Ido Ovadia <iovadia>
Severity: urgent
Priority: urgent
Docs Contact:
Version: 7.0 (Kilo)
CC: bperkins, bschmaus, dcadzow, fdinitto, iovadia, jguiditt, jschluet, jslagle, lbezdick, mburns, michele, rhos-flags, rscarazz, srevivo, ushkalim
Target Milestone: ---
Target Release: 7.0 (Kilo)
Keywords: FeatureBackport, Triaged, ZStream
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: openstack-puppet-modules-2015.1.8-53.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-07-20 08:21:55 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Chaitanya Shastri 2016-08-29 07:50:15 UTC
Description of problem:
This bug is already fixed in OSP8: see BZ https://bugzilla.redhat.com/show_bug.cgi?id=1285648.

Upstream bug: https://bugs.launchpad.net/tripleo/+bug/1526786

This is a request to backport it to OSP7.

Version-Release number of selected component (if applicable):
RHOS7

How reproducible:
Always


Business Justification:

Users should not be disconnected from the dashboard when a VIP failover occurs.
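
For context on the mechanism: HAProxy implements session persistence with cookie-based
stickiness, and the expected haproxy.cfg lines quoted later in this BZ (comment 7)
suggest that is what the backported change configures for the Horizon listener. Below
is a minimal sketch, assuming a listener named "horizon" and the "insert indirect
nocache" cookie flags; the bind placeholder and flags are illustrative, not taken from
this BZ or from the referenced review.

     ------------------------------------------------------------
       # Illustrative sketch only -- listener name, bind address and flags are assumptions.
       listen horizon
         bind <horizon_vip>:80                    # the virtual IP that clients connect to
         cookie SERVERID insert indirect nocache  # HAProxy inserts a cookie naming the chosen backend
         server mc0 172.17.1.165:80 cookie mc0 check fall 5 inter 2000 rise 2
         server mc1 172.17.1.167:80 cookie mc1 check fall 5 inter 2000 rise 2
         server mc2 172.17.1.163:80 cookie mc2 check fall 5 inter 2000 rise 2
     ------------------------------------------------------------

The browser returns the inserted cookie on subsequent requests, so HAProxy keeps
routing the session to the same backend as long as it is up; that is what keeps a
user's dashboard session alive when the VIP moves between controllers.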

Comment 4 Ido Ovadia 2016-09-15 14:52:49 UTC
Failed QA
=========
reproduced with openstack-puppet-modules-2015.1.8-53.el7ost.noarch 

Steps to Reproduce:
=================== 

   1. Install director and build an overcloud environment with
      HA controllers (haproxy) configuration.

   2. Access horizon from your browser with virtual IP.

   3. Move to your project and click "Volumes" tab.

   4. Click "Create Volume" and create a new volume.

Actual results:
=============== 

   Horizon doesn't display any pop-up messages in real time.

   After you reload the page, Horizon displays the
   pop-up message "Info: Creating volume <volume name>".

Comment 7 Jason Guiditta 2016-11-07 14:28:14 UTC
(In reply to Ido Ovadia from comment #4)
> Failed QA
> =========
> reproduced with openstack-puppet-modules-2015.1.8-53.el7ost.noarch 
> 
> Steps to Reproduce:
> =================== 
> 
>    1. Install director and build an overcloud environment with
>       HA controllers (haproxy) configuration.
> 
>    2. Access horizon from your browser with virtual IP.
> 
>    3. Move to your project and click "Volumes" tab.
> 
>    4. Click "Create Volume" and create a new volume.
> 
> Actual results:
> =============== 
> 
>    Horizon doesn't display any pop-up messages in real time.
> 
>    After you reload the page, Horizon displays the
>    pop-up message "Info: Creating volume <volume name>".

The original BZ 1285648 has already been verified in OSP8; this is merely a backport of the same change:

https://review.openstack.org/#/c/258483/2/manifests/loadbalancer.pp

If you look at the haproxy.cfg, does it look like what is expected in the related BZ?  Specifically:
     ------------------------------------------------------------
       server mc0 172.17.1.165:80 cookie AAA check fall 5 inter 2000 rise 2
       server mc1 172.17.1.167:80 cookie BBB check fall 5 inter 2000 rise 2
       server mc2 172.17.1.163:80 cookie CCC check fall 5 inter 2000 rise 2
     ------------------------------------------------------------
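
Note that those per-server "cookie AAA/BBB/CCC" keywords only take effect when the
same listen/backend section also declares a cookie to insert. A sketch of the
companion directive, assumed from the shape of the fix rather than quoted from the
review, would be:

     ------------------------------------------------------------
       cookie SERVERID insert indirect nocache   # assumed companion line; exact name/flags per the review
     ------------------------------------------------------------

so that line is worth checking for in haproxy.cfg as well.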


If it is not fixed here, it makes me wonder how it could work in the newer version. Note that upstream puppet-tripleo [1] has no stable/kilo branch at this point, as Kilo is out of support there, so this is a downstream-only backport.


[1] https://github.com/openstack/puppet-tripleo/branches

Comment 8 Benjamin Schmaus 2016-12-04 20:52:59 UTC
Any update since this issue failed QA?  Is the problem larger in scope than just OSP7?

Comment 10 Jason Guiditta 2016-12-13 14:20:11 UTC
Accidentally removed needinfo, fixing.

Comment 11 Michele Baldessari 2017-01-27 08:17:15 UTC
Hi Ido,

could we get more info on comment 7 and also on how it was tested in https://bugzilla.redhat.com/show_bug.cgi?id=1285648? I ask because that BZ went from ON_QA to ERRATA with no VERIFIED state and no comment, so I am wondering a bit how things went here. As Jason stated, this is a straight backport of something that was already verified, so I am a bit at a loss as to what is going on here.

Can we maybe work together and set up a reproducer?

Thanks,
Michele

Comment 12 Ido Ovadia 2017-02-07 15:07:33 UTC
(In reply to Michele Baldessari from comment #11)
> Hi Ido,
> 
> could we get more info on comment 7 and also on how it was tested in
> https://bugzilla.redhat.com/show_bug.cgi?id=1285648? I ask because that BZ
> went from ON_QA to ERRATA with no VERIFIED state and no comment, so I am
> wondering a bit how things went here. As Jason stated, this is a straight
> backport of something that was already verified, so I am a bit at a loss
> as to what is going on here.
> 
> Can we maybe work together and set up a reproducer?
> 
> Thanks,
> Michele

Sorry, I moved to another project but still get bugs and needinfos :)

I don't have free machines to set this up right now... hopefully next week, sorry.

Anyway, I described how I reproduced it when I set FailedQA in https://bugzilla.redhat.com/show_bug.cgi?id=1371023#c4.