Bug 228530 - the VIP doesn't go down when the first service is deactivated
Summary: the VIP doesn't go down when the first service is deactivated
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: piranha
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Marek Grac
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 249312
 
Reported: 2007-02-13 16:54 UTC by Bryn M. Reeves
Modified: 2018-10-19 21:13 UTC
CC List: 2 users

Fixed In Version: RHBA-2008-0794
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2008-07-25 19:08:49 UTC
Embargoed:


Attachments
Add check for inactive services to pulse's deactivateLvs (498 bytes, patch)
2007-02-13 16:54 UTC, Bryn M. Reeves
example lvs.cf that reproduces the problem (1.68 KB, text/plain)
2007-02-13 17:23 UTC, Bryn M. Reeves


Links
Red Hat Product Errata RHBA-2008:0794 (normal, SHIPPED_LIVE) - piranha bug fix and enhancement update - Last Updated: 2008-07-25 19:08:36 UTC

Description Bryn M. Reeves 2007-02-13 16:54:45 UTC
Description of problem:
This is related to bug 123342. The patch for that bug added two identical guards
in activateFOSMonitors and sendLvsArps:

+	      if (!config->failoverServices[j].isActive)
+		continue;
+

This causes us to skip inactive services so that we do not incorrectly think
their VIPs are already active if they are used by other services.
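
For context, that guard sits at the top of the per-service loop in each of
those two functions, roughly like this (the loop bound name
numFailoverServices and the loop body are paraphrased placeholders; only the
isActive check comes from the actual patch):

  for (j = 0; j < numFailoverServices; j++)
    {
      if (!config->failoverServices[j].isActive)
        continue;               /* skip services marked inactive in lvs.cf */

      /* ... activate the monitor / send gratuitous ARPs for this VIP ... */
    }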

A similar problem exists in deactivateLvs:

  /* deactivate the interfaces */
  for (i = 0; i < config->numVirtServers; i++)
    { 

      if (config->virtServers[i].failover_service)
        { 
          piranha_log (flags, (char *) "Warning; skipping failover service");
          continue;             /* This should not be possible anymore */
        } 
  
      for (j = 0; j < i; j++)
        { 
          if (!memcmp (&config->virtServers[i].virtualAddress,
                       &config->virtServers[j].virtualAddress,
                       sizeof (config->virtServers[i].virtualAddress)))
            break;
        }

      if (j == i)
        disableInterface (config->virtServers[i].virtualDevice, flags);
    }

In the inner loop, we will incorrectly break and avoid deactivating the
interface in the case that virtServers[j] is inactive but its virtualAddress
matches the other service.

This needs another check to see if virtServers[j] is inactive and continue if
that is the case.
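
A minimal sketch of the extra guard, mirroring the bug 123342 checks; the
isActive field name on virtServers is assumed here, and attachment 148004
contains the actual proposed patch:

      for (j = 0; j < i; j++)
        {
          if (!config->virtServers[j].isActive)
            continue;           /* assumed flag: ignore inactive entries */

          if (!memcmp (&config->virtServers[i].virtualAddress,
                       &config->virtServers[j].virtualAddress,
                       sizeof (config->virtServers[i].virtualAddress)))
            break;
        }

With inactive entries skipped, j only reaches i when no active earlier
virtual server shares the VIP, so disableInterface() is called on shutdown as
expected.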

This problem causes the VIP to remain active on the LVS router that is shutting
down, leading to it being active on both the primary and backup router in the
case of a failover.

Version-Release number of selected component (if applicable):
piranha-0.8.3-1

How reproducible:
100%

Steps to Reproduce:
1. Create an LVS configuration with at least two virtual servers sharing a
single VIP (a rough sketch follows these steps)
2. Disable the first service by setting "active = 0" in lvs.cf
3. Start the pulse service on both primary & backup routers
4. VIP should start correctly on primary
5. Stop pulse on the primary router
6. Confirm that VIP has been failed over to the backup router
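
For step 1, the configuration only needs two virtual server blocks sharing
one VIP, with the first one disabled. A rough sketch of the shape (global
section with primary/backup router settings omitted, names and addresses
illustrative; attachment 148008 is the actual reproducer):

virtual web1 {
     active = 0
     address = 192.168.1.100 eth0:1
     port = 80
     protocol = tcp
     scheduler = wlc
     server rs1 {
         address = 192.168.1.10
         active = 1
         weight = 1
     }
}
virtual web2 {
     active = 1
     address = 192.168.1.100 eth0:1
     port = 443
     protocol = tcp
     scheduler = wlc
     server rs2 {
         address = 192.168.1.11
         active = 1
         weight = 1
     }
}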

Actual results:
The VIP is active on both primary and backup LVS routers

Expected results:
The VIP is active only on one router at a time (the backup router in this example).

Additional info:
The same effect is seen when failing back to the primary by restarting pulse
on the primary router and then stopping pulse on the backup router.

Comment 1 Bryn M. Reeves 2007-02-13 16:54:46 UTC
Created attachment 148004 [details]
Add check for inactive services to pulse's deactivateLvs

Comment 2 Bryn M. Reeves 2007-02-13 17:23:12 UTC
Created attachment 148008 [details]
example lvs.cf that reproduces the problem

Comment 4 Lon Hohberger 2007-06-14 19:53:47 UTC
Reassigning to component owner

Comment 6 Marek Grac 2007-07-23 19:01:54 UTC
Patch is in the CVS branch RHEL4

Comment 10 errata-xmlrpc 2008-07-25 19:08:49 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2008-0794.html


