Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 890814

Summary: [FC] Failed in CreateVGVDS method on creation master SD in FC DC environment only
Product: Red Hat Enterprise Virtualization Manager
Reporter: vvyazmin <vvyazmin>
Component: ovirt-engine-webadmin-portal
Assignee: Einav Cohen <ecohen>
Status: CLOSED NOTABUG
QA Contact: Pavel Stehlik <pstehlik>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.1.1
CC: abaron, amureini, bazulay, dyasny, ecohen, hateya, iheim, lpeer, Rhev-m-bugs, ykaul
Target Milestone: ---
Target Release: 3.2.0
Hardware: x86_64
OS: Linux
Whiteboard: storage
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-12-30 12:15:37 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Logs vdsm, rhevm (flags: none)

Description vvyazmin@redhat.com 2012-12-30 11:19:37 UTC
Created attachment 670397: Logs vdsm, rhevm

Description of problem:
Get an error: "Failed in CreateVGVDS method" when creating the master SD, in an FC DC environment only

Version-Release number of selected component (if applicable):
RHEVM 3.2 - SF02 environment 

RHEVM: rhevm-3.2.0-2.el6ev.noarch
VDSM: vdsm-4.10.2-2.0.el6.x86_64
LIBVIRT: libvirt-0.10.2-13.el6.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64
SANLOCK: sanlock-2.6-2.el6.x86_64

How reproducible:
100 %

Steps to Reproduce:
1. Create an FC DC environment with one host
2. Create a new SD (master) 
3. Successfully create SD
  
Actual results:
Get an error: Failed in CreateVGVDS method

Expected results:
No errors found

Additional info:

/var/log/ovirt-engine/engine.log
2012-12-30 15:14:32,520 INFO  [org.ovirt.engine.core.bll.storage.AddStoragePoolWithStoragesCommand] (ajp-/127.0.0.1:8702-9) [13a83718] Running command: AddStoragePoolWithStoragesCommand internal: true. Entities affected :  ID: c808fd05-54de-4d33-a574-18282e9a0124 Type: StoragePool
2012-12-30 15:14:32,571 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp-/127.0.0.1:8702-7) [3761fcc3] Failed in CreateVGVDS method
2012-12-30 15:14:32,572 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp-/127.0.0.1:8702-7) [3761fcc3] Error code PhysDevInitializationError and error message VDSGenericException: VDSErrorException: Failed to CreateVGVDS, error = Failed to initialize physical device: ("['/dev/mapper/3514f0c5610000de1']",)
2012-12-30 15:14:32,582 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp-/127.0.0.1:8702-7) [3761fcc3] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand return value 
 Class Name: org.ovirt.engine.core.vdsbroker.irsbroker.OneUuidReturnForXmlRpc
mUuid                         Null
mStatus                       Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode                         601
mMessage                      Failed to initialize physical device: ("['/dev/mapper/3514f0c5610000de1']",)


2012-12-30 15:14:32,582 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (ajp-/127.0.0.1:8702-7) [3761fcc3] HostName = green-vdsa
2012-12-30 15:14:32,582 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp-/127.0.0.1:8702-7) [3761fcc3] Command CreateVGVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateVGVDS, error = Failed to initialize physical device: ("['/dev/mapper/3514f0c5610000de1']",)
2012-12-30 15:14:32,583 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp-/127.0.0.1:8702-7) [3761fcc3] FINISH, CreateVGVDSCommand, log id: 10a33a1
2012-12-30 15:14:32,583 ERROR [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp-/127.0.0.1:8702-7) [3761fcc3] Command org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateVGVDS, error = Failed to initialize physical device: ("['/dev/mapper/3514f0c5610000de1']",)
2012-12-30 15:14:32,631 INFO  [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp-/127.0.0.1:8702-7) [3761fcc3] Command [id=0435845a-f869-41f9-a4b8-fe4e90aa9a20]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic; snapshot: 5a230f5f-c9dd-477d-955d-f199899dc802.
2012-12-30 15:14:32,643 INFO  [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp-/127.0.0.1:8702-7) [3761fcc3] Command [id=0435845a-f869-41f9-a4b8-fe4e90aa9a20]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: 5a230f5f-c9dd-477d-955d-f199899dc802.
2012-12-30 15:14:32,648 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStoragePoolVDSCommand] (ajp-/127.0.0.1:8702-9) [13a83718] START, CreateStoragePoolVDSCommand(HostName = green-vdsa, HostId = c610737d-fca9-4595-8da4-83d9be1cd755, storagePoolId=c808fd05-54de-4d33-a574-18282e9a0124, storageType=FCP, storagePoolName=DC-FC-03, masterDomainId=7beb0cfa-5e25-4ff1-a608-f692e8c98961, domainsIdList=[7beb0cfa-5e25-4ff1-a608-f692e8c98961], masterVersion=1), log id: 76e02ef4
2012-12-30 15:14:32,664 ERROR [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp-/127.0.0.1:8702-7) [3761fcc3] Transaction rolled-back for command: org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand.

/var/log/vdsm/vdsm.log
Thread-687::ERROR::2012-12-30 15:14:52,060::task::833::TaskManager.Task::(_setError) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 600, in getSpmStatus
    pool = self.getPool(spUUID)
  File "/usr/share/vdsm/storage/hsm.py", line 310, in getPool
    raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: ('c808fd05-54de-4d33-a574-18282e9a0124',)
Thread-687::DEBUG::2012-12-30 15:14:52,061::task::852::TaskManager.Task::(_run) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::Task._run: 355f629d-674d-4275-b047-48fa01c7e12d ('c808fd05-54de-4d33-a574-18282e9a0124',) {} failed - stopping task
Thread-687::DEBUG::2012-12-30 15:14:52,061::task::1177::TaskManager.Task::(stop) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::stopping in state preparing (force False)
Thread-687::DEBUG::2012-12-30 15:14:52,061::task::957::TaskManager.Task::(_decref) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::ref 1 aborting True
Thread-687::INFO::2012-12-30 15:14:52,061::task::1134::TaskManager.Task::(prepare) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::aborting: Task is aborted: 'Unknown pool id, pool not connected' - code 309
Thread-687::DEBUG::2012-12-30 15:14:52,061::task::1139::TaskManager.Task::(prepare) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::Prepare: aborted: Unknown pool id, pool not connected
Thread-687::DEBUG::2012-12-30 15:14:52,061::task::957::TaskManager.Task::(_decref) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::ref 0 aborting True
Thread-687::DEBUG::2012-12-30 15:14:52,062::task::892::TaskManager.Task::(_doAbort) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::Task._doAbort: force False
Thread-687::DEBUG::2012-12-30 15:14:52,062::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-687::DEBUG::2012-12-30 15:14:52,062::task::568::TaskManager.Task::(_updateState) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::moving from state preparing -> state aborting
Thread-687::DEBUG::2012-12-30 15:14:52,062::task::523::TaskManager.Task::(__state_aborting) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::_aborting: recover policy none
Thread-687::DEBUG::2012-12-30 15:14:52,062::task::568::TaskManager.Task::(_updateState) Task=`355f629d-674d-4275-b047-48fa01c7e12d`::moving from state aborting -> state failed
Thread-687::DEBUG::2012-12-30 15:14:52,062::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-687::DEBUG::2012-12-30 15:14:52,063::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-687::ERROR::2012-12-30 15:14:52,063::dispatcher::66::Storage.Dispatcher.Protect::(run) {'status': {'message': "Unknown pool id, pool not connected: ('c808fd05-54de-4d33-a574-18282e9a0124',)", 'code': 309}}

Comment 1 Ayal Baron 2012-12-30 11:51:23 UTC
The above vdsm excerpt is irrelevant.
The problem is that the PV you're using is already part of another VG.
This was reported properly in getDeviceList.
Wasn't the LUN marked as 'used' in the GUI?
Was this from the GUI or from the REST API?
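
To confirm this on the host, LVM can be queried directly. The following is a minimal sketch (not VDSM code; the device path is a placeholder to be replaced with the multipath device from the log) that prints the VG already owning the LUN, i.e. the same "in use" condition that getDeviceList reports:

import subprocess

# Placeholder path -- substitute the multipath device reported in the log.
DEVICE = "/dev/mapper/3514f0c5610000de7"

# 'pvs' prints the owning VG name for a physical volume; a non-empty value
# means the LUN already belongs to an existing VG and cannot be reused for
# a new storage domain without forcing re-initialization.
result = subprocess.run(
    ["pvs", "--noheadings", "-o", "vg_name", DEVICE],
    capture_output=True, text=True,
)
if result.returncode != 0:
    print(f"{DEVICE} is not an LVM physical volume: {result.stderr.strip()}")
else:
    print(f"{DEVICE} belongs to VG: {result.stdout.strip() or '<none>'}")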

Force wasn't specified:
Thread-669::INFO::2012-12-30 15:14:25,447::logUtils::37::dispatcher::(wrapper) Run and protect: createVG(vgname='5a230f5f-c9dd-477d-955d-f199899dc802', devlist=['3514f0c5610000de1'], force=False, options=None)

Daniel, can a user use a 'used' LUN in the GUI without specifically requesting a force override?

Thread-197397::DEBUG::2012-12-30 14:48:11,580::misc::83::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = '  Can\'t initialize physical volume "/dev/mapper/3514f0c5610000de7" of volume group "a0f53c59-c0e9-4e3b-8e6f-faa19e8a1a66" without -ff\n'; <rc> = 5
Thread-197397::DEBUG::2012-12-30 14:48:11,583::lvm::471::OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-197397::DEBUG::2012-12-30 14:48:11,584::lvm::474::OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-197397::ERROR::2012-12-30 14:48:11,584::lvm::679::Storage.LVM::(_initpvs) pvcreate failed with rc=5
Thread-197397::ERROR::2012-12-30 14:48:11,584::lvm::680::Storage.LVM::(_initpvs) [], ['  Can\'t initialize physical volume "/dev/mapper/3514f0c5610000de7" of volume group "a0f53c59-c0e9-4e3b-8e6f-faa19e8a1a66" without -ff']
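
The LVM behaviour behind this pvcreate failure can be reproduced outside of VDSM. Below is a minimal sketch (not VDSM code; the device path is a placeholder) that runs pvcreate the way createVG does with force=False and shows the same "without -ff" refusal when the LUN still carries metadata of another VG:

import subprocess

# Placeholder path -- substitute the multipath device from the log. WARNING:
# re-running with force=True would destroy the existing VG metadata.
DEVICE = "/dev/mapper/3514f0c5610000de7"

def try_pvcreate(device, force=False):
    # Without force, LVM refuses to initialize a PV that already belongs to
    # a VG ("Can't initialize physical volume ... without -ff") and exits
    # non-zero -- the rc=5 logged by lvm._initpvs above.
    cmd = ["pvcreate", "-ff", "-y", device] if force else ["pvcreate", device]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    print("rc =", proc.returncode)
    if proc.stderr:
        print(proc.stderr.strip())
    return proc.returncode == 0

try_pvcreate(DEVICE, force=False)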