Bug 870768 - 3.1 - multipath? [vdsm] ReconstructMasterDomain fails in ConnectStoragePool - cannot find master domain
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.3
Hardware: All
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Federico Simoncelli
QA Contact: Gadi Ickowicz
URL:
Whiteboard: storage
Depends On: 854140
Blocks: 896506
 
Reported: 2012-10-28 15:58 UTC by Gadi Ickowicz
Modified: 2022-07-09 05:40 UTC
CC: 19 users

Fixed In Version: vdsm-4.9.6-44.0
Doc Type: Bug Fix
Doc Text:
Previously, if the connection between a host and the storage server was blocked, the storage domain was not able to reconnect. connectStoragePool now rescans iSCSI connections to reactivate storage domains after such an interruption.
Clone Of:
Clones: 896506
Environment:
Last Closed: 2012-12-04 19:13:38 UTC
Target Upstream Version:
Embargoed:


Attachments
logs (3.43 MB, application/x-gzip), 2012-10-28 15:58 UTC, Gadi Ickowicz
vdsm + engine logs (1.14 MB, application/x-gzip), 2012-11-05 08:35 UTC, Gadi Ickowicz
vdsm logs (734.49 KB, application/x-gzip), 2012-11-06 15:17 UTC, Gadi Ickowicz


Links
Red Hat Product Errata RHSA-2012:1508, SHIPPED_LIVE: Important: rhev-3.1.0 vdsm security, bug fix, and enhancement update (last updated 2012-12-04 23:48:05 UTC)
oVirt gerrit 9274, ABANDONED: [wip] sdcache: avoid extra refresh due samplingmethod

Description Gadi Ickowicz 2012-10-28 15:58:02 UTC
Created attachment 634565 [details]
logs

Description of problem:
After blocking the connection between the host and the storage server (single host, single storage domain, iSCSI), waiting for the data center to become problematic and the storage domain to become inactive, and then unblocking the connection, the storage domain is never able to reconnect.

Version-Release number of selected component (if applicable):
rhevm-3.1.0-22.el6ev.noarch

How reproducible:
100%

Steps to Reproduce:
(reproduced through the storageNegative-1host-iscsi automated test; a sketch of the block/unblock step follows the list)
1. Block the connection between the host and the storage server.
2. Wait for the data center to become problematic and the storage domain to go inactive.
3. Unblock the connection.
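
A minimal sketch of the block/unblock step, assuming the test runs as root on the host; the server address and helper name are placeholders, not part of the automated test:

# Hypothetical helper to simulate the storage outage with iptables.
# Assumptions: run as root on the host; STORAGE_SERVER stands in for
# the real iSCSI portal address used by the storage domain.
import subprocess

STORAGE_SERVER = "192.0.2.10"  # placeholder address

def set_storage_block(blocked):
    """Add (blocked=True) or delete (blocked=False) a DROP rule for
    all outgoing traffic to the storage server."""
    action = "-A" if blocked else "-D"
    subprocess.check_call(
        ["iptables", action, "OUTPUT", "-d", STORAGE_SERVER, "-j", "DROP"])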

  
Actual results:
After unblocking the connection, the data center never returns to active status.

Expected results:
The data center should return to active status.

Additional info:
Logs attached (both inside logs.tar.gz):
173.tar.bz2 - failed run logs
167.tar.bz2 - successful run logs

Comment 4 Gadi Ickowicz 2012-10-30 10:39:58 UTC
I have been able to reproduce this problem consistently with:
vdsm-4.9.6-39.0.el6_3.x86_64

After removing the iptables block, the VG is not visible and the LUN is listed as "failed faulty running" in multipath.

Did not reproduce on a host with vdsm-4.9.6-30.0.el6_3.x86_64 on the same engine, which leads me to suspect the issue is vdsm-related; it seems similar to the multipath problems described in https://bugzilla.redhat.com/show_bug.cgi?id=854140
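
A hedged diagnostic sketch of the check described above, using only the standard multipath CLI via subprocess (not a vdsm API), run as root:

# List paths still flagged "failed" and "faulty" after the block is removed,
# by parsing `multipath -ll` output.
import subprocess

def faulty_paths():
    out = subprocess.check_output(["multipath", "-ll"], text=True)
    return [line.strip() for line in out.splitlines()
            if "failed" in line and "faulty" in line]

if __name__ == "__main__":
    for path in faulty_paths():
        print("still faulty:", path)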

Comment 6 Yaniv Kaul 2012-10-30 10:57:51 UTC
Proposing to 3.1.

Comment 7 Dan Kenigsberg 2012-10-31 13:35:45 UTC
Shouldn't multipath recover on its own and find the newly available path to storage? Has it ever worked? With which vdsm/multipath/kernel versions?

Which multipath/kernel were used?

Comment 8 Gadi Ickowicz 2012-10-31 14:33:00 UTC
(In reply to comment #7)
> shouldn't multipath recover on its own, and find the newly available path to
> storage? has it ever worked? with which vdsm/multipath/kernel versions?
> 
> Which multipath/kernel were used?

It worked (and still works) on some hosts; on some hosts it works some of the time, and on other hosts it never works. This seems strange. Even stranger, one host that always works and one that never works use the same multipath version:

device-mapper-multipath-0.4.9-56.el6_3.1.x86_64
device-mapper-multipath-libs-0.4.9-56.el6_3.1.x86_64

kernel version on the host that never recovers is: 2.6.32-279.9.1.el6.x86_64
kernel version on the host that always works is:   2.6.32-279.el6.x86_64

Comment 9 Ayal Baron 2012-11-04 08:09:14 UTC
(In reply to comment #4)
> I have been able to reproduce this problem consistently with:
> vdsm-4.9.6-39.0.el6_3.x86_64
> 
> after removing the iptables block the vg is not visible, and the LUN is
> listed as "failed faulty running" in multipath.
> 
> Did not reproduce on a host with vdsm-4.9.6-30.0.el6_3.x86_64 on the same
> engine - leads me to suspect issue is vdsm related, seems similar to

Since you mention above that on some hosts it always reproduces and on some never, have you tried reproducing on a host where it *always* reproduces with vdsm version vdsm-4.9.6-30.0.el6_3.x86_64?

Comment 10 Gadi Ickowicz 2012-11-05 08:35:57 UTC
Created attachment 638400 [details]
vdsm + engine logs

(In reply to comment #9)
> (In reply to comment #4)
> > I have been able to reproduce this problem consistently with:
> > vdsm-4.9.6-39.0.el6_3.x86_64
> > 
> > after removing the iptables block the vg is not visible, and the LUN is
> > listed as "failed faulty running" in multipath.
> > 
> > Did not reproduce on a host with vdsm-4.9.6-30.0.el6_3.x86_64 on the same
> > engine - leads me to suspect issue is vdsm related, seems similar to
> 
> Since you mention above that on some hosts it always reproduces and on some
> never, have you tried reproducing on a host where it *always* reproduces
> with vdsm version vdsm-4.9.6-30.0.el6_3.x86_64?

I just ran this test again with vdsm-4.9.6-31.0.el6_3.x86_64 on *another* host that was reproducing consistently with vdsm-4.9.6-39.0.el6_3.x86_64, and on the older version (31) it does not reproduce.

Comment 11 Yaniv Kaul 2012-11-05 12:53:59 UTC
Gadi, please start bisecting between 39 and 31, so we'll have a better chance of understanding where it went in - so try with vdsm -35, and so on.

Comment 12 Gadi Ickowicz 2012-11-06 15:16:22 UTC
(In reply to comment #11)
> Gadi, please start bisecting between 39 and 31, so we'll have a better
> chance of understanding where it went in - so try with vdsm -35, and so on.

I ran 35, 37, and 38; they all succeed (that is, the bug DOES NOT REPRODUCE). 39 fails (Reconstruct fails, and the setup does not recover after the block is removed).
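
For reference, the bisection Yaniv requested in comment 11 is an ordinary binary search over the build sequence; a minimal sketch, with a hypothetical reproduces() callback standing in for an install-and-test run:

# Sketch of the bisection over the vdsm builds discussed here (31..39).
# `reproduces` is a hypothetical callback that installs the given build,
# runs the storageNegative-1host-iscsi test, and returns True on failure.
def first_bad_build(builds, reproduces):
    lo, hi = 0, len(builds) - 1   # builds[lo] known good, builds[hi] known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if reproduces(builds[mid]):
            hi = mid
        else:
            lo = mid
    return builds[hi]             # first build where the bug reproduces

# e.g. first_bad_build([31, 35, 37, 38, 39], reproduces) -> 39, per comment 12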

Comment 13 Gadi Ickowicz 2012-11-06 15:17:55 UTC
Created attachment 639410 [details]
vdsm logs

vdsm logs. The file includes logs from runs with vdsm -35, -37, -38, and -39.

Comment 16 Federico Simoncelli 2012-11-16 17:48:28 UTC
commit 39f82cf3b999e62cb0df65f4cf5d34cfffeb41f3
Author: Federico Simoncelli <fsimonce>
Date:   Thu Nov 15 12:59:53 2012 -0500

    pool: refresh multipath on connectStoragePool
    
    On connectStoragePool we should rescan the iscsi connections to
    reactivate them in case they were previously interrupted.
    
    Bug-Url: https://bugzilla.redhat.com/show_bug.cgi?id=870768
    Change-Id: Ie26a7a2577b65d3fb70586a849e0245e64344e3b
    Signed-off-by: Federico Simoncelli <fsimonce>

http://gerrit.ovirt.org/#/c/9275/
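
The effect of the change is to force path rediscovery before connecting the pool. A minimal sketch of that idea using standard CLI tools, not the actual vdsm code:

# Before connecting the pool, rescan the logged-in iSCSI sessions and
# reload the multipath maps so paths that failed while the connection
# was blocked are reinstated.
import subprocess

def refresh_storage_paths():
    subprocess.call(["iscsiadm", "-m", "session", "--rescan"])  # rescan all sessions
    subprocess.call(["multipath", "-r"])                        # reload multipath maps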

Comment 18 Gadi Ickowicz 2012-11-26 09:08:13 UTC
Verified using an automated test running the scenario described above.

Comment 20 errata-xmlrpc 2012-12-04 19:13:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html

