Bug 829428 - 3.1 - vdsm: reconstruct master domain will fail when the domains are located on different storage servers
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Hardware: x86_64 Linux
Priority: high Severity: urgent
Target Milestone: rc
Target Release: ---
Assigned To: Eduardo Warszawski
QA Contact: Dafna Ron
Keywords: Reopened, TestBlocker
Depends On:
Blocks: 788904
Reported: 2012-06-06 13:16 EDT by Dafna Ron
Modified: 2014-07-01 07:58 EDT
CC: 6 users

See Also:
Fixed In Version: vdsm-4.9.6-24.0.el6_3
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2012-12-05 02:45:23 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments
logs (584.45 KB, application/x-gzip)
2012-06-06 13:18 EDT, Dafna Ron

Description Dafna Ron 2012-06-06 13:16:05 EDT
Description of problem:

Reconstruct master domain fails during a connection problem from the host to storage when the domains are located on two different storage servers.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create two domains, each located on a different storage server.
2. Block connectivity from the host to the master domain using iptables.
Actual results:

Reconstruct master domain fails.

Expected results:

Reconstruct master should succeed.

Additional info: logs attached

After speaking with Edu, this was probably caused by a patch that has since been reverted, so we will probably not see it in vdsm-14.
However, we need to re-test this once the new vdsm comes out, and this issue is also a TestBlocker for the current vdsm.

600: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/sdk was disabled\n  Couldn't find device with uuid KERvf0-hvTO-AEGm-pFui-bQLd-UNTY-7kTExm.\n"; <rc> = 0
MainThread::DEBUG::2012-06-03 19:56:51,704::lvm::356::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
MainThread::ERROR::2012-06-03 19:56:51,704::clientIF::162::vds::(_initIRS) Error initializing IRS
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 160, in _initIRS
    self.irs = Dispatcher(HSM())
  File "/usr/share/vdsm/storage/hsm.py", line 298, in __init__
  File "/usr/share/vdsm/storage/lvm.py", line 324, in bootstrap
  File "/usr/share/vdsm/storage/lvm.py", line 346, in _reloadpvs
    pv = makePV(*fields)
  File "/usr/share/vdsm/storage/lvm.py", line 200, in makePV
    name = fixPVName(args[1])
  File "/usr/share/vdsm/storage/lvm.py", line 195, in fixPVName
    dmId = devicemapper.getDmIdFromFile(devPath)
  File "/usr/share/vdsm/storage/devicemapper.py", line 35, in getDmIdFromFile
    raise OSError(errno.ENODEV, "Could not find dm device named `%s`" % path)
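The traceback shows HSM bootstrap aborting entirely because one PV reported by LVM maps to a device-mapper node that no longer exists after the device was disabled by I/O errors. The sketch below is a simplified illustration of that failure mode, not vdsm's actual code; the function names and the device path are hypothetical stand-ins mimicking `fixPVName`/`getDmIdFromFile`:

```python
import errno
import os


def get_dm_id_from_file(dev_path):
    # Stand-in for devicemapper.getDmIdFromFile: resolve the device node
    # and raise ENODEV when it has vanished, as in the traceback above.
    if not os.path.exists(dev_path):
        raise OSError(errno.ENODEV,
                      "Could not find dm device named `%s`" % dev_path)
    return os.path.basename(os.path.realpath(dev_path))


def fix_pv_name(pv_name):
    # Stand-in for lvm.fixPVName: translate an LVM-reported PV path to a
    # dm id.  If the PV's storage server is unreachable and the node was
    # removed, the lookup raises and the whole bootstrap fails instead of
    # skipping the single unreachable PV.
    return get_dm_id_from_file(pv_name)


# A PV whose storage server is blocked (e.g. via iptables): its
# /dev/mapper node is gone, so initialization aborts.
try:
    fix_pv_name("/dev/mapper/nonexistent-blocked-pv")
except OSError as e:
    print("bootstrap aborted: %s" % e)
```

This matches the observed behavior: an error on one PV (from the blocked storage server) propagates up through `makePV` and kills `_initIRS`, rather than letting reconstruct proceed on the reachable domain.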
Comment 1 Dafna Ron 2012-06-06 13:18:43 EDT
Created attachment 589963 (logs)
Comment 3 Eduardo Warszawski 2012-07-22 12:56:41 EDT
fixPVName was reverted by:
Change-Id: Iac8d97df92f67f8dd91c37817146d117889b4a13

Comment 4 Dafna Ron 2012-07-31 04:41:20 EDT
verified on vdsm-4.9.6-24.0.el6_3.x86_64 si12
