Bug 1353430 - RHEV-M should rescan the scsi bus when creating and attaching a new FC storage domain
Summary: RHEV-M should rescan the scsi bus when creating and attaching a new FC storage domain
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.6.5
Hardware: All
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.0.2
Target Release: 4.0.2
Assignee: Fred Rolland
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-07 04:40 UTC by nijin ashok
Modified: 2019-11-14 08:41 UTC (History)
12 users

Fixed In Version: ovirt-engine-4.0.2.2
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-08-23 20:43:46 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
vdsm server and engine logs (754.63 KB, application/x-gzip)
2016-08-09 16:37 UTC, Kevin Alon Goldblatt
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2016:1743 0 normal SHIPPED_LIVE Red Hat Virtualization Manager 4.0 GA Enhancement (ovirt-engine) 2016-09-02 21:54:01 UTC
oVirt gerrit 60582 0 master MERGED engine: ConnectStorageServer on attach FC SD 2016-07-26 10:19:22 UTC
oVirt gerrit 61389 0 ovirt-engine-4.0 MERGED engine: ConnectStorageServer on attach FC SD 2016-07-26 11:19:00 UTC
oVirt gerrit 61392 0 ovirt-engine-4.0.2 MERGED engine: ConnectStorageServer on attach FC SD 2016-07-26 11:19:12 UTC

Description nijin ashok 2016-07-07 04:40:34 UTC
Description of problem:

Currently RHEV-M only rescans the scsi bus of the host selected as "Use Host" when adding an FC storage domain. The LUN is therefore visible only on that host, and attaching the storage domain to the Data Center fails (unless the customer goes to each host and rescans the scsi bus manually).

I think that, as part of the attach process, we should rescan the scsi bus of every host in the Data Center, so that attaching the storage domain works without any manual intervention.

This already works for iscsi, where we issue connectStorageServer for each host.
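Until the engine does this automatically, the manual workaround mentioned above is to force a scsi bus rescan on every host. A minimal sketch of that loop, assuming the standard sysfs layout on RHEL 7 (the `rescan_all_scsi_hosts` function name and its `sysfs_root` parameter are mine, added only so the loop can be exercised outside a real host):

```shell
# Write "- - -" (all channels, all targets, all LUNs) into every SCSI host's
# scan trigger so newly mapped FC LUNs become visible without a reboot.
# sysfs_root defaults to /sys; it is parameterized purely for illustration.
rescan_all_scsi_hosts() {
    sysfs_root="${1:-/sys}"
    for scan in "$sysfs_root"/class/scsi_host/host*/scan; do
        [ -w "$scan" ] || continue   # skip unmatched glob / unwritable entries
        echo "- - -" > "$scan"
    done
}

# On each host in the Data Center, run as root:
#   rescan_all_scsi_hosts
```

Running this on every host before attaching the domain makes the LUN visible cluster-wide; the `rescan-scsi-bus.sh` script shipped in the sg3_utils package does the same job.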

Version-Release number of selected component (if applicable):

Red Hat Enterprise Virtualization 3.6
rhevm-3.6.5.3-0.1.el6.noarch
rhevm-backend-3.6.5.3-0.1.el6.noarch

How reproducible:

100%

Steps to Reproduce:

1. Assign a new FC LUN from the storage array to the hosts.

2. Add it to RHEV-M as an FC storage domain without manually rescanning the scsi bus.

3. The storage domain is added successfully; however, the attach process fails with the error "Storage domain does not exist".
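The symptom in step 3 can be confirmed by checking, on a host other than the one chosen as "Use Host", whether the LUN's multipath device exists at all. A tiny illustrative helper (the `lun_visible` name and its `dev_root` parameter are hypothetical; in practice you would simply look for `/dev/mapper/<WWID>` or run `multipath -ll`):

```shell
# lun_visible WWID [DEV_ROOT]: succeed only if the multipath device node for
# the given LUN WWID exists. DEV_ROOT defaults to /dev and is parameterized
# only so the check can be demonstrated outside a real host.
lun_visible() {
    wwid="$1"
    dev_root="${2:-/dev}"
    [ -e "$dev_root/mapper/$wwid" ]
}
```

Before the fix, this check succeeds only on the "Use Host" host; after a manual rescan (or with the fix) it succeeds on every host in the Data Center.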

Actual results:

Attaching storage domain fails with error "Storage domain does not exist"

Expected results:

Attaching the storage domain should work without any manual intervention.

Additional info:

Comment 3 Allon Mureinik 2016-07-07 12:07:06 UTC
Fred, shouldn't this have been solved by bug 1242200?

Comment 7 Kevin Alon Goldblatt 2016-08-09 16:20:17 UTC
rhevm-4.0.2.4-0.1.el7ev.noarch
vdsm-4.18.10-1.el7ev.x86_64

Tested with the following scenario:


Steps to Reproduce:
1. Create an FC domain and select a LUN.

Actual results:
The domain is created successfully, but the Attach LUN is reported to have failed on the host that was selected in "Use Host".


Moving to ASSIGNED! 



From vdsm.log on the host that was chosen in "Use Host"
---------------------------------------------------------
Domain.create' in bridge with {u'name': u'fc_domain', u'domainType': 2, u'domainClass': 1, u'typeArgs': u'FZkAt8-wbM3-R6K0-ctxv-Tfpk-RpTO-vPvOHH', u'version': u'3', u'storagedomainID': u'9bd915f2-1937-42b2-a74b-adc473658bbd'}
jsonrpc.Executor/6::DEBUG::2016-08-09 19:03:04,235::task::597::Storage.TaskManager.Task::(_updateState) Task=`f8bdcbd5-98c6-4739-8242-a64d0787c03f`::moving from state init -> state preparing
jsonrpc.Executor/6::INFO::2016-08-09 19:03:04,235::logUtils::49::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=2, sdUUID=u'9bd915f2-1937-42b2-a74b-adc473658bbd', domainName=u'fc_domain', typeSpecificArg=u'FZkAt8-wbM3-R6K0-ctxv-Tfpk-RpTO-vPvOHH', domClass=1, domVersion=u'3', options=None)
jsonrpc.Executor/6::ERROR::2016-08-09 19:03:04,235::sdc::140::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 9bd915f2-1937-42b2-a74b-adc473658bbd
jsonrpc.Executor/6::ERROR::2016-08-09 19:03:04,235::sdc::157::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 9bd915f2-1937-42b2-a74b-adc473658bbd
jsonrpc.Executor/6::ERROR::2016-08-09 19:03:04,237::sdc::146::Storage.StorageDomainCache::(_findDomain) domain 9bd915f2-1937-42b2-a74b-adc473658bbd not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 144, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 174, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'9bd915f2-1937-42b2-a74b-adc473658bbd',)
jsonrpc.Executor/6::INFO::2016-08-09 19:03:04,237::blockSD::865::Storage.StorageDomain::(create) sdUUID=9bd915f2-1937-42b2-a74b-adc473658bbd domainName=fc_domain domClass=1 vgUUID=FZkAt8-wbM3-R6K0-ctxv-Tfpk-RpTO-vPvOHH storageType=2 version=3
jsonrpc.Executor/6::DEBUG::2016-08-09 19:03:04,238::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-3 /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/3514f0c5a5160001f|/dev/mapper/3514f0c5a51600020|/dev/mapper/3514f0c5a51600021|/dev/mapper/3514f0c5a51600022|/dev/mapper/3514f0c5a51600023|/dev/mapper/3514f0c5a51600024|/dev/mapper/3514f0c5a51600328|/dev/mapper/3514f0c5a51600329|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None)
jsonrpc.Executor/6::DEBUG::2016-08-09 19:03:04,338::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = "  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  Couldn't find device with uuid CVleRk-c492-fpP1-WFEc-N13T-Lu1f-aKQFNf.\n"; <rc> = 0

Comment 8 Kevin Alon Goldblatt 2016-08-09 16:37:30 UTC
Created attachment 1189357 [details]
vdsm server and engine logs

Adding logs

Comment 9 Fred Rolland 2016-08-10 10:54:45 UTC
Hi Kevin,

Can you explain what exactly failed? What is "Attach LUN"?

Also, please describe the test scenario you performed.

Thanks,

Fred

Comment 12 errata-xmlrpc 2016-08-23 20:43:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-1743.html

