Description of problem:
Currently RHEV-M only rescans the SCSI bus of the host selected as "Use Host" when adding an FC storage domain. As a result the new LUN is visible only on that host, and attaching the storage domain to the Data Center fails unless the customer goes to each host and rescans the SCSI bus manually. As part of the attach process we should rescan the SCSI bus of every host in the Data Center, so that attaching the storage domain works without any manual intervention. This already works for iSCSI, where connectStorageServer is issued to each host.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Virtualization 3.6
rhevm-3.6.5.3-0.1.el6.noarch
rhevm-backend-3.6.5.3-0.1.el6.noarch

How reproducible:
100%

Steps to Reproduce:
1. Assign a new FC LUN from the storage array.
2. Add it to RHEV-M without manually rescanning the SCSI bus.
3. Addition of the storage domain succeeds, but the attach process fails with the error "Storage domain does not exist".

Actual results:
Attaching the storage domain fails with the error "Storage domain does not exist".

Expected results:
Attaching the storage domain should work without any manual intervention.

Additional info:
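For reference, this is roughly what a customer has to do manually on every host today. A minimal sketch of that workaround, assuming the standard Linux sysfs SCSI rescan interface (this is not RHEV-M/vdsm code; it would have to run as root on each hypervisor before the attach):

    import glob

    def rescan_scsi_bus():
        """Ask every SCSI host adapter to rescan all channels, targets and LUNs."""
        for scan_node in glob.glob("/sys/class/scsi_host/host*/scan"):
            # Writing the wildcard "- - -" (channel target lun) triggers a full rescan.
            with open(scan_node, "w") as f:
                f.write("- - -")

    if __name__ == "__main__":
        rescan_scsi_bus()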
Fred, shouldn't this have been solved by bug 1242200?
rhevm-4.0.2.4-0.1.el7ev.noarch
vdsm-4.18.10-1.el7ev.x86_64

Tested with the following scenario:

Steps to Reproduce:
1. Create an FC domain and select a LUN.

>>>> The domain is created successfully, but the Attach LUN is reported to have failed on the host that was selected in "Use Host".

Actual results:
The domain is created successfully, but the Attach LUN is reported to have failed on the host that was selected in "Use Host".

Moving to ASSIGNED!

From vdsm.log on the host that was chosen in "Use Host":
---------------------------------------------------------
Domain.create' in bridge with {u'name': u'fc_domain', u'domainType': 2, u'domainClass': 1, u'typeArgs': u'FZkAt8-wbM3-R6K0-ctxv-Tfpk-RpTO-vPvOHH', u'version': u'3', u'storagedomainID': u'9bd915f2-1937-42b2-a74b-adc473658bbd'}
jsonrpc.Executor/6::DEBUG::2016-08-09 19:03:04,235::task::597::Storage.TaskManager.Task::(_updateState) Task=`f8bdcbd5-98c6-4739-8242-a64d0787c03f`::moving from state init -> state preparing
jsonrpc.Executor/6::INFO::2016-08-09 19:03:04,235::logUtils::49::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=2, sdUUID=u'9bd915f2-1937-42b2-a74b-adc473658bbd', domainName=u'fc_domain', typeSpecificArg=u'FZkAt8-wbM3-R6K0-ctxv-Tfpk-RpTO-vPvOHH', domClass=1, domVersion=u'3', options=None)
jsonrpc.Executor/6::ERROR::2016-08-09 19:03:04,235::sdc::140::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain 9bd915f2-1937-42b2-a74b-adc473658bbd
jsonrpc.Executor/6::ERROR::2016-08-09 19:03:04,235::sdc::157::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain 9bd915f2-1937-42b2-a74b-adc473658bbd
jsonrpc.Executor/6::ERROR::2016-08-09 19:03:04,237::sdc::146::Storage.StorageDomainCache::(_findDomain) domain 9bd915f2-1937-42b2-a74b-adc473658bbd not found
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sdc.py", line 144, in _findDomain
    dom = findMethod(sdUUID)
  File "/usr/share/vdsm/storage/sdc.py", line 174, in _findUnfetchedDomain
    raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'9bd915f2-1937-42b2-a74b-adc473658bbd',)
jsonrpc.Executor/6::INFO::2016-08-09 19:03:04,237::blockSD::865::Storage.StorageDomain::(create) sdUUID=9bd915f2-1937-42b2-a74b-adc473658bbd domainName=fc_domain domClass=1 vgUUID=FZkAt8-wbM3-R6K0-ctxv-Tfpk-RpTO-vPvOHH storageType=2 version=3
jsonrpc.Executor/6::DEBUG::2016-08-09 19:03:04,238::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/taskset --cpu-list 0-3 /usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ '\''a|/dev/mapper/3514f0c5a5160001f|/dev/mapper/3514f0c5a51600020|/dev/mapper/3514f0c5a51600021|/dev/mapper/3514f0c5a51600022|/dev/mapper/3514f0c5a51600023|/dev/mapper/3514f0c5a51600024|/dev/mapper/3514f0c5a51600328|/dev/mapper/3514f0c5a51600329|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None)
jsonrpc.Executor/6::DEBUG::2016-08-09 19:03:04,338::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = " WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  Couldn't find device with uuid CVleRk-c492-fpP1-WFEc-N13T-Lu1f-aKQFNf.\n"; <rc> = 0
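For context, the StorageDomainDoesNotExist in the traceback above is raised when none of the storage domain cache's backend probes can see the domain. A rough paraphrase of that lookup pattern (the names mirror /usr/share/vdsm/storage/sdc.py from the traceback, but this is an illustrative sketch, not the actual vdsm code):

    class StorageDomainDoesNotExist(Exception):
        pass

    class StorageDomainCache:
        def __init__(self, find_methods):
            self._cache = {}                   # sdUUID -> previously found domain
            self._find_methods = find_methods  # probe functions per storage backend

        def find_domain(self, sd_uuid):
            # Fast path: the domain was already discovered on this host.
            if sd_uuid in self._cache:
                return self._cache[sd_uuid]
            # Slow path: probe each backend. On FC the block probe can only see
            # LUNs the host's SCSI bus has already discovered, so a LUN that was
            # never rescanned on this host falls through to the exception.
            for find in self._find_methods:
                dom = find(sd_uuid)
                if dom is not None:
                    self._cache[sd_uuid] = dom
                    return dom
            raise StorageDomainDoesNotExist(sd_uuid)

This is also why a manual SCSI rescan on the host makes the same lookup succeed afterwards.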
Created attachment 1189357 [details]
vdsm server and engine logs

Adding logs.
Hi Kevin,

Can you explain what exactly failed? What is "Attach LUN"? Also, please describe the test scenario you ran.

Thanks,
Fred
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-1743.html