Bug 1275381 - Error while executing action New SAN Storage Domain: Cannot zero out volume
Summary: Error while executing action New SAN Storage Domain: Cannot zero out volume
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: General
Version: 3.5.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: ovirt-3.6.1
Target Release: 3.6.1
Assignee: Ala Hino
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2015-10-26 17:46 UTC by Devin Bougie
Modified: 2016-02-10 19:21 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The vdsm user was defined in an external identity store and was not created locally by vdsm.
Consequence: udev did not apply the expected permissions to the storage domain device paths, so access to them failed with a permission denied error.
Fix: Let vdsm create the vdsm user locally, or manually reload the udev rules by running "udevadm control --reload" (see the command sketch after the flags section below).
Result: Permissions are applied by udev and access to the storage domain succeeds.
IMPORTANT: The recommended configuration is to let vdsm create the vdsm user locally. Otherwise the results may be unexpected; in this bug, for example, the permissions were not set appropriately.
Clone Of:
Environment:
Last Closed: 2015-11-10 06:37:07 UTC
oVirt Team: Storage
Embargoed:
tnisan: ovirt-3.6.z?
tnisan: ovirt-4.0.0?
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?
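
A minimal sketch of the manual workaround named in the Doc Text, assuming vdsm's udev rules are already installed on the host (the rules file path is not shown in this report) and using the dm-20 node that the reporter's ls -l output maps to the metadata LV at the time of the report:

[root@lnx84 ~]# udevadm control --reload    # re-read the udev rules, including vdsm's
[root@lnx84 ~]# udevadm trigger             # re-run the rules against existing devices
[root@lnx84 ~]# ls -l /dev/dm-20            # metadata LV; expected owner vdsm:qemu once the rules apply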


Attachments
engine.log (4.51 MB, text/plain)
2015-10-27 16:20 UTC, Devin Bougie
no flags
vdsm.log (11.06 MB, text/plain)
2015-10-27 16:21 UTC, Devin Bougie
no flags
installed packages (96.82 KB, text/plain)
2015-11-02 17:16 UTC, Devin Bougie
no flags
udev logs (3.45 KB, text/plain)
2015-11-02 17:17 UTC, Devin Bougie
no flags
udev logs (207.67 KB, text/plain)
2015-11-03 16:41 UTC, Devin Bougie
no flags

Description Devin Bougie 2015-10-26 17:46:04 UTC
Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error while executing action New SAN Storage Domain: Cannot zero out volume" error.

iscsid does log in to the node, and the volumes appear to have been created. However, I cannot use it to create or import a Data / iSCSI storage domain.

This is on EL7.1 with selinux disabled.

[root@lnx84 ~]# iscsiadm -m node
#.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1

[root@lnx84 ~]# iscsiadm -m session
tcp: [1] #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1 (non-flash)

[root@lnx84 ~]# pvscan
 PV /dev/mapper/1IET_00010001   VG f73c8720-77c3-42a6-8a29-9677db54bac6   lvm2 [547.62 GiB / 543.75 GiB free]
...
[root@lnx84 ~]# lvscan
 inactive          '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/metadata' [512.00 MiB] inherit
 inactive          '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/outbox' [128.00 MiB] inherit
 inactive          '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/leases' [2.00 GiB] inherit
 inactive          '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/ids' [128.00 MiB] inherit
 inactive          '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/inbox' [128.00 MiB] inherit
 inactive          '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/master' [1.00 GiB] inherit
...

------
As suggested on the ovirt users mailing list, zeroing out the metadata volume with dd (as root) does succeed; a sketch of the same command run as the vdsm user follows the output below.

[root@lnx84 ~]# ls -l /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/
total 0
lrwxrwxrwx 1 root root 8 Oct 23 16:05 ids -> ../dm-23
lrwxrwxrwx 1 root root 8 Oct 23 16:05 inbox -> ../dm-24
lrwxrwxrwx 1 root root 8 Oct 23 16:05 leases -> ../dm-22
lrwxrwxrwx 1 root root 8 Oct 23 16:05 metadata -> ../dm-20
lrwxrwxrwx 1 root root 8 Oct 23 16:05 outbox -> ../dm-21

[root@lnx84 ~]# /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 seek=0 skip=0 conv=notrunc count=40 oflag=direct
40+0 records in
40+0 records out
41943040 bytes (42 MB) copied, 0.435552 s, 96.3 MB/s
------
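
Since vdsm apparently runs the same zeroing command as the vdsm user (it fails later in vdsm.log with "Permission denied" while the root-run dd above succeeds), a hedged way to confirm the permission difference is to repeat the dd as vdsm. The command below is only a sketch, assuming sudo access to the vdsm account, and is expected (not verified here) to fail the same way:
------
[root@lnx84 ~]# sudo -u vdsm /usr/bin/dd if=/dev/zero of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 count=1 oflag=direct conv=notrunc
------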

Any help would be greatly appreciated.

Many thanks,
Devin

Here are the relevant lines from engine.log:
------
2015-10-23 16:04:56,925 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (ajp--127.0.0.1-8702-8) START, GetDeviceListVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageType=ISCSI), log id: 44a64578
2015-10-23 16:04:57,681 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (ajp--127.0.0.1-8702-8) FINISH, GetDeviceListVDSCommand, return: [LUNs [id=1IET_00010001, physicalVolumeId=wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn, volumeGroupId=8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu, serial=SIET_VIRTUAL-DISK, lunMapping=1, vendorId=IET, productId=VIRTUAL-DISK, _lunConnections=[{ id: null, connection: #.#.#.#, iqn: iqn.2015-10.N.N.N.lnx88:lnx88.target1, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };], deviceSize=547, vendorName=IET, pathsDictionary={sdi=true}, lunType=ISCSI, status=Used, diskId=null, diskAlias=null, storageDomainId=null, storageDomainName=null]], log id: 44a64578
2015-10-23 16:05:06,474 INFO  [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Running command: AddSANStorageDomainCommand internal: false. Entities affected :  ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-10-23 16:05:06,488 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateVGVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageDomainId=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19, deviceList=[1IET_00010001], force=true), log id: 12acc23b
2015-10-23 16:05:07,379 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateVGVDSCommand, return: dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P, log id: 12acc23b
2015-10-23 16:05:07,384 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateStorageDomainVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageDomain=StorageDomainStatic[lnx88, cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19], args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P), log id: cc93ec6
2015-10-23 16:05:10,356 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Failed in CreateStorageDomainVDS method
2015-10-23 16:05:10,360 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand return value 
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=374, mMessage=Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)]]
2015-10-23 16:05:10,367 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] HostName = lnx84
2015-10-23 16:05:10,370 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command CreateStorageDomainVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageDomain=StorageDomainStatic[lnx88, cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19], args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',), code = 374
2015-10-23 16:05:10,381 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateStorageDomainVDSCommand, log id: cc93ec6
2015-10-23 16:05:10,385 ERROR [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',), code = 374 (Failed with error VolumesZeroingError and code 374)
2015-10-23 16:05:10,394 INFO  [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command [id=624f979b-fa98-4b1e-9244-c2c2c675d91b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic; snapshot: cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19.
2015-10-23 16:05:10,395 INFO  [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command [id=624f979b-fa98-4b1e-9244-c2c2c675d91b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19.
2015-10-23 16:05:10,422 ERROR [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Transaction rolled-back for command: org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand.
2015-10-23 16:05:10,430 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-8) [53dd8c98] Correlation ID: 53dd8c98, Job ID: 7c09ed86-b0b3-448e-8ae1-50b4cce57508, Call Stack: null, Custom Event ID: -1, Message: Failed to add Storage Domain lnx88. (User: admin@internal)

------

And here's some of what I see in vdsm.log.  Note that I'm trying to add the domain on the node that the engine is running on.

------
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:38,546::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:38,548::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1838::DEBUG::2015-10-23 16:04:38,551::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:41,558::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:41,560::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1839::DEBUG::2015-10-23 16:04:41,563::task::595::Storage.TaskManager.Task::(_updateState) Task=`e649893f-f5b1-423a-a529-7def05ced600`::moving from state init -> state preparing
Thread-1839::INFO::2015-10-23 16:04:41,563::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1839::INFO::2015-10-23 16:04:41,563::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-1839::DEBUG::2015-10-23 16:04:41,563::task::1191::Storage.TaskManager.Task::(prepare) Task=`e649893f-f5b1-423a-a529-7def05ced600`::finished: {}
Thread-1839::DEBUG::2015-10-23 16:04:41,564::task::595::Storage.TaskManager.Task::(_updateState) Task=`e649893f-f5b1-423a-a529-7def05ced600`::moving from state preparing -> state finished
Thread-1839::DEBUG::2015-10-23 16:04:41,564::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1839::DEBUG::2015-10-23 16:04:41,564::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1839::DEBUG::2015-10-23 16:04:41,564::task::993::Storage.TaskManager.Task::(_decref) Task=`e649893f-f5b1-423a-a529-7def05ced600`::ref 0 aborting False
Thread-1839::DEBUG::2015-10-23 16:04:41,566::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:41,574::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:41,575::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1840::DEBUG::2015-10-23 16:04:41,578::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:44,616::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:44,618::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1841::DEBUG::2015-10-23 16:04:44,621::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:47,628::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:47,630::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1842::DEBUG::2015-10-23 16:04:47,633::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:50,640::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:50,641::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1843::DEBUG::2015-10-23 16:04:50,643::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:53,650::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:53,652::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1844::DEBUG::2015-10-23 16:04:53,655::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:56,663::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:56,665::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1845::DEBUG::2015-10-23 16:04:56,667::task::595::Storage.TaskManager.Task::(_updateState) Task=`1b4d0d0d-7ea5-4f29-99b1-e5fa5448c632`::moving from state init -> state preparing
Thread-1845::INFO::2015-10-23 16:04:56,668::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1845::INFO::2015-10-23 16:04:56,668::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-1845::DEBUG::2015-10-23 16:04:56,668::task::1191::Storage.TaskManager.Task::(prepare) Task=`1b4d0d0d-7ea5-4f29-99b1-e5fa5448c632`::finished: {}
Thread-1845::DEBUG::2015-10-23 16:04:56,668::task::595::Storage.TaskManager.Task::(_updateState) Task=`1b4d0d0d-7ea5-4f29-99b1-e5fa5448c632`::moving from state preparing -> state finished
Thread-1845::DEBUG::2015-10-23 16:04:56,668::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1845::DEBUG::2015-10-23 16:04:56,668::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1845::DEBUG::2015-10-23 16:04:56,669::task::993::Storage.TaskManager.Task::(_decref) Task=`1b4d0d0d-7ea5-4f29-99b1-e5fa5448c632`::ref 0 aborting False
Thread-1845::DEBUG::2015-10-23 16:04:56,671::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:56,678::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:56,679::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1846::DEBUG::2015-10-23 16:04:56,682::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:56,931::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:56,933::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1847::DEBUG::2015-10-23 16:04:56,933::__init__::481::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Host.getDeviceList' in bridge with {u'storageType': 3}
Thread-1847::DEBUG::2015-10-23 16:04:56,937::task::595::Storage.TaskManager.Task::(_updateState) Task=`8706ee69-83be-4b2d-90dd-d1d4561fe8c9`::moving from state init -> state preparing
Thread-1847::INFO::2015-10-23 16:04:56,937::logUtils::44::dispatcher::(wrapper) Run and protect: getDeviceList(storageType=3, options={})
Thread-1847::DEBUG::2015-10-23 16:04:56,937::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-1847::DEBUG::2015-10-23 16:04:56,937::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-1847::DEBUG::2015-10-23 16:04:56,938::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-1847::DEBUG::2015-10-23 16:04:56,938::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-1847::DEBUG::2015-10-23 16:04:56,938::iscsi::424::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-1847::DEBUG::2015-10-23 16:04:56,938::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:56,961::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-1847::DEBUG::2015-10-23 16:04:56,961::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-1847::DEBUG::2015-10-23 16:04:56,962::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-1847::DEBUG::2015-10-23 16:04:56,962::hba::53::Storage.HBA::(rescan) Starting scan
Thread-1847::DEBUG::2015-10-23 16:04:56,962::utils::739::Storage.HBA::(execCmd) /usr/bin/sudo -n /usr/libexec/vdsm/fc-scan (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,002::hba::66::Storage.HBA::(rescan) Scan finished
Thread-1847::DEBUG::2015-10-23 16:04:57,002::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-1847::DEBUG::2015-10-23 16:04:57,003::multipath::131::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /sbin/multipath (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,186::multipath::131::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-1847::DEBUG::2015-10-23 16:04:57,186::utils::739::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,196::utils::759::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-1847::DEBUG::2015-10-23 16:04:57,197::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,197::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,197::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,198::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,198::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,198::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,198::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-1847::DEBUG::2015-10-23 16:04:57,199::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,202::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,443::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1847::DEBUG::2015-10-23 16:04:57,444::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,510::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup { retain_min = 50  retain_days = 0 } ' --test --metadatasize 128m --metadatacopies 2 --metadataignore y /dev/mapper/1IET_00010001 (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,610::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.\n  Can\'t initialize physical volume "/dev/mapper/1IET_00010001" of volume group "24cd6134-8229-42ca-9fe1-489143d1d8d3" without -ff\n'; <rc> = 5
Thread-1847::DEBUG::2015-10-23 16:04:57,613::lvm::301::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --test --metadatasize 128m --metadatacopies 2 --metadataignore y /dev/mapper/1IET_00010001 (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,676::lvm::301::Storage.Misc.excCmd::(cmd) FAILED: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.\n  Can\'t initialize physical volume "/dev/mapper/1IET_00010001" of volume group "24cd6134-8229-42ca-9fe1-489143d1d8d3" without -ff\n'; <rc> = 5
Thread-1847::DEBUG::2015-10-23 16:04:57,676::lvm::861::Storage.LVM::(testPVCreate) rc: 5, out: [], err: ['  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', '  TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.', '  Can\'t initialize physical volume "/dev/mapper/1IET_00010001" of volume group "24cd6134-8229-42ca-9fe1-489143d1d8d3" without -ff'], unusedDevs: set([]), usedDevs: set(['/dev/mapper/1IET_00010001'])
Thread-1847::INFO::2015-10-23 16:04:57,677::logUtils::47::dispatcher::(wrapper) Run and protect: getDeviceList, Return response: {'devList': [{'status': 'used', 'vendorID': 'IET', 'capacity': '588332924928', 'fwrev': '0001', 'vgUUID': '8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu', 'pathlist': [{'initiatorname': 'default', 'connection': '#.#.#.#', 'iqn': 'iqn.2015-10.N.N.N.lnx88:lnx88.target1', 'portal': '1', 'password': '******', 'port': '3260'}], 'logicalblocksize': '512', 'pathstatus': [{'physdev': 'sdi', 'type': 'iSCSI', 'state': 'active', 'lun': '1'}], 'devtype': 'iSCSI', 'physicalblocksize': '512', 'pvUUID': 'wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn', 'serial': 'SIET_VIRTUAL-DISK', 'GUID': '1IET_00010001', 'productID': 'VIRTUAL-DISK'}]}
Thread-1847::DEBUG::2015-10-23 16:04:57,677::task::1191::Storage.TaskManager.Task::(prepare) Task=`8706ee69-83be-4b2d-90dd-d1d4561fe8c9`::finished: {'devList': [{'status': 'used', 'fwrev': '0001', 'vgUUID': '8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu', 'pathlist': [{'connection': '#.#.#.#', 'iqn': 'iqn.2015-10.N.N.N.lnx88:lnx88.target1', 'portal': '1', 'port': '3260', 'initiatorname': 'default'}], 'logicalblocksize': '512', 'devtype': 'iSCSI', 'physicalblocksize': '512', 'serial': 'SIET_VIRTUAL-DISK', 'GUID': '1IET_00010001', 'productID': 'VIRTUAL-DISK', 'vendorID': 'IET', 'capacity': '588332924928', 'pathstatus': [{'physdev': 'sdi', 'type': 'iSCSI', 'state': 'active', 'lun': '1'}], 'pvUUID': 'wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn'}]}
Thread-1847::DEBUG::2015-10-23 16:04:57,678::task::595::Storage.TaskManager.Task::(_updateState) Task=`8706ee69-83be-4b2d-90dd-d1d4561fe8c9`::moving from state preparing -> state finished
Thread-1847::DEBUG::2015-10-23 16:04:57,678::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1847::DEBUG::2015-10-23 16:04:57,678::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1847::DEBUG::2015-10-23 16:04:57,678::task::993::Storage.TaskManager.Task::(_decref) Task=`8706ee69-83be-4b2d-90dd-d1d4561fe8c9`::ref 0 aborting False
Thread-1847::DEBUG::2015-10-23 16:04:57,679::__init__::514::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getDeviceList' in bridge with [{'status': 'used', 'vendorID': 'IET', 'capacity': '588332924928', 'fwrev': '0001', 'vgUUID': '8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu', 'pathlist': [{'connection': '#.#.#.#', 'iqn': 'iqn.2015-10.N.N.N.lnx88:lnx88.target1', 'portal': '1', 'port': '3260', 'initiatorname': 'default'}], 'logicalblocksize': '512', 'pathstatus': [{'type': 'iSCSI', 'physdev': 'sdi', 'deviceType': 'iSCSI', 'state': 'active', 'lun': '1'}], 'devtype': 'iSCSI', 'physicalblocksize': '512', 'pvUUID': 'wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn', 'serial': 'SIET_VIRTUAL-DISK', 'GUID': '1IET_00010001', 'productID': 'VIRTUAL-DISK'}]
Thread-1847::DEBUG::2015-10-23 16:04:57,679::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:59,721::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:59,723::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1848::DEBUG::2015-10-23 16:04:59,726::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:02,734::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:02,736::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1849::DEBUG::2015-10-23 16:05:02,739::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:05,746::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:05,747::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1850::DEBUG::2015-10-23 16:05:05,750::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:06,515::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:06,517::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1851::DEBUG::2015-10-23 16:05:06,517::__init__::481::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'LVMVolumeGroup.create' in bridge with {u'devlist': [u'1IET_00010001'], u'force': True, u'name': u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19'}
Thread-1851::DEBUG::2015-10-23 16:05:06,519::task::595::Storage.TaskManager.Task::(_updateState) Task=`2148889f-5651-400b-98da-56e58614342c`::moving from state init -> state preparing
Thread-1851::INFO::2015-10-23 16:05:06,519::logUtils::44::dispatcher::(wrapper) Run and protect: createVG(vgname=u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19', devlist=[u'1IET_00010001'], force=True, options=None)
Thread-1851::DEBUG::2015-10-23 16:05:06,623::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup { retain_min = 50  retain_days = 0 } ' -y -ff --metadatasize 128m --metadatacopies 2 --metadataignore y /dev/mapper/1IET_00010001 (cwd None)
Thread-1851::DEBUG::2015-10-23 16:05:06,866::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  WARNING: Forcing physical volume creation on /dev/mapper/1IET_00010001 of volume group "24cd6134-8229-42ca-9fe1-489143d1d8d3"\n'; <rc> = 0
Thread-1851::DEBUG::2015-10-23 16:05:06,868::lvm::492::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:06,871::lvm::495::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:06,876::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvchange --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup { retain_min = 50  retain_days = 0 } ' --metadataignore n /dev/mapper/1IET_00010001 (cwd None)
Thread-1851::DEBUG::2015-10-23 16:05:07,021::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1851::DEBUG::2015-10-23 16:05:07,021::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup { retain_min = 50  retain_days = 0 } ' --physicalextentsize 128m --addtag RHAT_storage_domain_UNREADY cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 /dev/mapper/1IET_00010001 (cwd None)
Thread-1851::DEBUG::2015-10-23 16:05:07,294::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1851::DEBUG::2015-10-23 16:05:07,295::lvm::492::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:07,295::lvm::495::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:07,296::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:07,296::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:07,296::lvm::932::Storage.LVM::(createVG) Cache after createvg {u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19': Stub(name=u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19', stale=True)}
Thread-1851::DEBUG::2015-10-23 16:05:07,296::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:07,297::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1851::DEBUG::2015-10-23 16:05:07,374::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1851::DEBUG::2015-10-23 16:05:07,375::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-1851::INFO::2015-10-23 16:05:07,375::logUtils::47::dispatcher::(wrapper) Run and protect: createVG, Return response: {'uuid': 'dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P'}
Thread-1851::DEBUG::2015-10-23 16:05:07,376::task::1191::Storage.TaskManager.Task::(prepare) Task=`2148889f-5651-400b-98da-56e58614342c`::finished: {'uuid': 'dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P'}
Thread-1851::DEBUG::2015-10-23 16:05:07,376::task::595::Storage.TaskManager.Task::(_updateState) Task=`2148889f-5651-400b-98da-56e58614342c`::moving from state preparing -> state finished
Thread-1851::DEBUG::2015-10-23 16:05:07,376::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1851::DEBUG::2015-10-23 16:05:07,376::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1851::DEBUG::2015-10-23 16:05:07,376::task::993::Storage.TaskManager.Task::(_decref) Task=`2148889f-5651-400b-98da-56e58614342c`::ref 0 aborting False
Thread-1851::DEBUG::2015-10-23 16:05:07,377::__init__::514::jsonrpc.JsonRpcServer::(_serveRequest) Return 'LVMVolumeGroup.create' in bridge with dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P
Thread-1851::DEBUG::2015-10-23 16:05:07,377::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:07,392::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
Thread-1852::DEBUG::2015-10-23 16:05:07,393::__init__::481::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StorageDomain.create' in bridge with {u'name': u'lnx88', u'domainType': 3, u'domainClass': 1, u'typeArgs': u'dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P', u'version': u'3', u'storagedomainID': u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19'}
JsonRpcServer::DEBUG::2015-10-23 16:05:07,393::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1852::DEBUG::2015-10-23 16:05:07,397::task::595::Storage.TaskManager.Task::(_updateState) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::moving from state init -> state preparing
Thread-1852::INFO::2015-10-23 16:05:07,397::logUtils::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=3, sdUUID=u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19', domainName=u'lnx88', typeSpecificArg=u'dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P', domClass=1, domVersion=u'3', options=None)
Thread-1852::ERROR::2015-10-23 16:05:07,397::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19
Thread-1852::ERROR::2015-10-23 16:05:07,398::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19
Thread-1852::ERROR::2015-10-23 16:05:07,400::sdc::143::Storage.StorageDomainCache::(_findDomain) domain cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 not found
Traceback (most recent call last):
 File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
   dom = findMethod(sdUUID)
 File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
   raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19',)
Thread-1852::INFO::2015-10-23 16:05:07,401::blockSD::484::Storage.StorageDomain::(create) sdUUID=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 domainName=lnx88 domClass=1 vgUUID=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P storageType=3 version=3
Thread-1852::DEBUG::2015-10-23 16:05:07,401::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,401::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,477::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,477::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,478::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,478::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup { retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,534::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,534::lvm::463::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-1852::DEBUG::2015-10-23 16:05:07,535::lvm::463::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-1852::INFO::2015-10-23 16:05:07,535::blockSD::468::Storage.StorageDomain::(metaSize) size 512 MB (metaratio 262144)
Thread-1852::DEBUG::2015-10-23 16:05:07,535::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup { retain_min = 50  retain_days = 0 } ' --autobackup n --contiguous n --size 512m --name metadata cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,652::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  WARNING: This metadata update is NOT backed up\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,653::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,653::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,653::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,654::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,654::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,674::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,675::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,676::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,747::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,748::lvm::463::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-1852::DEBUG::2015-10-23 16:05:07,748::lvm::463::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,749::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,749::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,831::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,832::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-1852::INFO::2015-10-23 16:05:07,832::blockSD::653::Storage.StorageDomain::(getMetaDataMapping) Create: SORT MAPPING: ['/dev/mapper/1IET_00010001']
Thread-1852::DEBUG::2015-10-23 16:05:07,832::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,833::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size /dev/mapper/1IET_00010001 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,899::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,899::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,900::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup { retain_min = 50  retain_days = 0 } ' --autobackup n --contiguous n --size 16m --name outbox cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,021::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  WARNING: This metadata update is NOT backed up\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,021::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,022::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,022::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,022::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,023::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/outbox (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,042::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,042::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --contiguous n --size 2048m --name leases cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,205::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  WARNING: This metadata update is NOT backed up\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,206::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,206::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,207::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,207::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,207::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/leases (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,227::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,228::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --contiguous n --size 8m --name ids cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,353::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  WARNING: This metadata update is NOT backed up\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,354::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,354::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,354::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,355::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,355::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/ids (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,374::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,375::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --contiguous n --size 16m --name inbox cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,547::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  WARNING: This metadata update is NOT backed up\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,548::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,548::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,549::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,549::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,549::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/inbox (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,570::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,571::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --contiguous n --size 1024m --name master cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,744::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n  WARNING: This metadata update is NOT backed up\nWARNING: ext3 signature detected on /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/master at offset 1080. Wipe it? [y/n]: n\n  Aborted wiping of ext3.\n  1 existing signature left on the device.\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,745::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,745::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,745::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,746::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,746::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/master (cwd None)
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:08,759::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:08,761::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1854::DEBUG::2015-10-23 16:05:08,764::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Thread-1852::DEBUG::2015-10-23 16:05:08,768::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,768::blockSD::1342::Storage.Misc.excCmd::(_createVMSfs) /usr/bin/sudo -n /usr/sbin/mkfs -q -j -E nodiscard /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/master (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:10,184::blockSD::1342::Storage.Misc.excCmd::(_createVMSfs) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:10,185::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvchange --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] }  global {  locking_type=1 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }  backup {  retain_min = 50  retain_days = 0 } ' --autobackup n --available n cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/master (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:10,339::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:10,340::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:10,340::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:10,341::blockSD::535::Storage.Misc.excCmd::(create) /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 seek=0 skip=0 conv=notrunc count=40 oflag=direct (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:10,351::blockSD::535::Storage.Misc.excCmd::(create) FAILED: <err> = "/usr/bin/dd: failed to open '/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata': Permission denied\n"; <rc> = 1
Thread-1852::ERROR::2015-10-23 16:05:10,351::task::866::Storage.TaskManager.Task::(_setError) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::Unexpected error
Traceback (most recent call last):
 File "/usr/share/vdsm/storage/task.py", line 873, in _run
   return fn(*args, **kargs)
 File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
   res = f(*args, **kwargs)
 File "/usr/share/vdsm/storage/hsm.py", line 2687, in createStorageDomain
   domVersion)
 File "/usr/share/vdsm/storage/blockSD.py", line 543, in create
   raise se.VolumesZeroingError(path)
VolumesZeroingError: Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)
Thread-1852::DEBUG::2015-10-23 16:05:10,352::task::885::Storage.TaskManager.Task::(_run) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::Task._run: f0b046fd-009c-4437-95c8-b5648808bc72 (3, u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19', u'lnx88', u'dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P', 1, u'3') {} failed - stopping task
Thread-1852::DEBUG::2015-10-23 16:05:10,352::task::1217::Storage.TaskManager.Task::(stop) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::stopping in state preparing (force False)
Thread-1852::DEBUG::2015-10-23 16:05:10,352::task::993::Storage.TaskManager.Task::(_decref) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::ref 1 aborting True
Thread-1852::INFO::2015-10-23 16:05:10,352::task::1171::Storage.TaskManager.Task::(prepare) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::aborting: Task is aborted: 'Cannot zero out volume' - code 374
Thread-1852::DEBUG::2015-10-23 16:05:10,353::task::1176::Storage.TaskManager.Task::(prepare) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::Prepare: aborted: Cannot zero out volume
Thread-1852::DEBUG::2015-10-23 16:05:10,353::task::993::Storage.TaskManager.Task::(_decref) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::ref 0 aborting True
Thread-1852::DEBUG::2015-10-23 16:05:10,353::task::928::Storage.TaskManager.Task::(_doAbort) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::Task._doAbort: force False
Thread-1852::DEBUG::2015-10-23 16:05:10,353::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1852::DEBUG::2015-10-23 16:05:10,353::task::595::Storage.TaskManager.Task::(_updateState) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::moving from state preparing -> state aborting
Thread-1852::DEBUG::2015-10-23 16:05:10,354::task::550::Storage.TaskManager.Task::(__state_aborting) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::_aborting: recover policy none
Thread-1852::DEBUG::2015-10-23 16:05:10,354::task::595::Storage.TaskManager.Task::(_updateState) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::moving from state aborting -> state failed
Thread-1852::DEBUG::2015-10-23 16:05:10,354::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1852::DEBUG::2015-10-23 16:05:10,354::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1852::ERROR::2015-10-23 16:05:10,354::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)", 'code': 374}}
Thread-1852::DEBUG::2015-10-23 16:05:10,355::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:11,771::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:11,773::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1855::DEBUG::2015-10-23 16:05:11,776::task::595::Storage.TaskManager.Task::(_updateState) Task=`23187509-3a17-4bf0-bc7a-dcaa97ed4cdf`::moving from state init -> state preparing
Thread-1855::INFO::2015-10-23 16:05:11,776::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1855::INFO::2015-10-23 16:05:11,776::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-1855::DEBUG::2015-10-23 16:05:11,776::task::1191::Storage.TaskManager.Task::(prepare) Task=`23187509-3a17-4bf0-bc7a-dcaa97ed4cdf`::finished: {}
Thread-1855::DEBUG::2015-10-23 16:05:11,777::task::595::Storage.TaskManager.Task::(_updateState) Task=`23187509-3a17-4bf0-bc7a-dcaa97ed4cdf`::moving from state preparing -> state finished
Thread-1855::DEBUG::2015-10-23 16:05:11,777::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1855::DEBUG::2015-10-23 16:05:11,777::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1855::DEBUG::2015-10-23 16:05:11,777::task::993::Storage.TaskManager.Task::(_decref) Task=`23187509-3a17-4bf0-bc7a-dcaa97ed4cdf`::ref 0 aborting False
Thread-1855::DEBUG::2015-10-23 16:05:11,779::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:11,786::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:11,789::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1856::DEBUG::2015-10-23 16:05:11,791::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:14,845::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:14,847::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1857::DEBUG::2015-10-23 16:05:14,850::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
------

Comment 1 Tal Nisan 2015-10-27 14:47:33 UTC
Hi Devin,
Can you please attach full logs from the engine and the SPM host? Snippets are not always as effective as the full logs.

Comment 2 Devin Bougie 2015-10-27 16:20:18 UTC
Created attachment 1086913 [details]
engine.log

Comment 3 Devin Bougie 2015-10-27 16:21:05 UTC
Created attachment 1086915 [details]
vdsm.log

Comment 4 Devin Bougie 2015-10-27 16:23:20 UTC
Hi Tal,

engine.log and vdsm.log have now been attached.  Please note that these are both running on the same host, although I get the same error if I try to create the iSCSI domain from another host.

I tried adding the domain again today, if that helps narrow down where to look in the logs.  

Just in case it helps, the storage domain is not listed in the ovirt-engine GUI (web interface) or in ovirt-shell.  However, when I try to add it again, it first says "The following LUNs are already in use: ...," but if I "Approve operation" I get the same "Cannot zero out volume" error.

If I try to import, I can log into the target but it doesn't show any "Storage Name / Storage ID (VG Name)" to import.

Please let me know if there is anything more I can provide or anything else I can do to help.

Thanks,
Devin

Comment 5 Ala Hino 2015-10-28 13:13:38 UTC
Hi Devin,

Can you please run the following commands and send us the output?

$ ls -lhZ `realpath /dev/d2198dd8-ab81-4c3f-b2a7-21dc6b2e0a10/*` (note the backtick ` character, it is not a single quote '. You can copy & paste the command.)

$ cat /usr/lib/udev/rules.d/12-vdsm-lvm.rules

$ ls /etc/udev/rules.d/

$ cat /etc/udev/rules.d/12-vdsm-lvm.rules (the file may not exist)

Thanks,
Ala

Comment 6 Devin Bougie 2015-10-28 13:31:18 UTC
Hi Ala,

Here you go.  Please let me know if the id in the first command may have changed (I keep testing different combinations of hosts, etc.).

Thanks!
Devin

[root@lnx84 ~]# ls -lhZ `realpath /dev/d2198dd8-ab81-4c3f-b2a7-21dc6b2e0a10/*`
brw------- root qemu    ?                                /dev/dm-20
brw------- root qemu    ?                                /dev/dm-21
brw-rw---- root sanlock ?                                /dev/dm-22
brw-rw---- root sanlock ?                                /dev/dm-23
brw------- root qemu    ?                                /dev/dm-24
[root@lnx84 ~]# cat /usr/lib/udev/rules.d/12-vdsm-lvm.rules
#
# Copyright 2010 Red Hat, Inc. and/or its affiliates.
#
# Licensed to you under the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.  See the files README and
# LICENSE_GPL_v2 which accompany this distribution.
#

# Udev rules for LVM.
#
# These rules create symlinks for LVM logical volumes in
# /dev/VG directory (VG is an actual VG name). Some udev
# environment variables are set (they can be used in later
# rules as well):
#   DM_LV_NAME - logical volume name
#   DM_VG_NAME - volume group name
#   DM_LV_LAYER - logical volume layer (blank if not set)

# "add" event is processed on coldplug only, so we need "change", too.
ACTION!="add|change", GOTO="lvm_end"

# Volumes used as vdsm images
# WARNING: we cannot use OWNER, GROUP and MODE since using any of them will
# change the selinux label to the default, causing vms to pause after extending
# disks. https://bugzilla.redhat.com/1147910
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", RUN+="/usr/bin/chown vdsm:qemu $env{DEVNAME}", GOTO="lvm_end"

# Other volumes used by vdsm
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]_MERGE", OWNER:="vdsm", GROUP:="qemu", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="_remove_me_[a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9][a-zA-Z0-9]_[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", OWNER:="vdsm", GROUP:="qemu", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="metadata", MODE:="0600", OWNER:="vdsm", GROUP:="qemu", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="ids", MODE:="0660", OWNER:="vdsm", GROUP:="sanlock", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="inbox", MODE:="0600", OWNER:="vdsm", GROUP:="qemu", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="outbox", MODE:="0600", OWNER:="vdsm", GROUP:="qemu", GOTO="lvm_end"
ENV{DM_VG_NAME}=="[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9]-[a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9][a-f0-9]", ENV{DM_LV_NAME}=="leases", MODE:="0660", OWNER:="vdsm", GROUP:="sanlock", GOTO="lvm_end"

# FIXME: make special lvs vdsm-only readable, MODE doesn't work on rhel6

LABEL="lvm_end"
[root@lnx84 ~]# ls /etc/udev/rules.d/
12-ovirt-iosched.rules  60-ipath.rules  70-persistent-ipoib.rules
[root@lnx84 ~]# cat /etc/udev/rules.d/12-vdsm-lvm.rules 
cat: /etc/udev/rules.d/12-vdsm-lvm.rules: No such file or directory

Comment 7 Ala Hino 2015-10-29 05:18:01 UTC
Oved,

Could you please check whether we have any infra issues here?

Thanks.

Comment 8 Ala Hino 2015-10-29 06:26:41 UTC
(In reply to Ala Hino from comment #7)
> Oved,
> 
> Could you please check whether we have any infra issues here?
> 
> Thanks.

To be more specific, it seems like there is an installation issue.

Comment 9 Oved Ourfali 2015-10-29 06:29:21 UTC
Yaniv, can you take a look?

Comment 10 Ala Hino 2015-10-29 06:34:36 UTC
One more piece of info, there is a permission denied issue:

Thread-33390::DEBUG::2015-10-27 12:18:50,223::blockSD::535::Storage.Misc.excCmd::(create) /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero of=/dev/d2198dd8-ab81-4c3f-b2a7-21dc6b2e0a10/metadata bs=1048576 seek=0 skip=0 conv=notrunc count=40 oflag=direct (cwd None)
Thread-33390::DEBUG::2015-10-27 12:18:50,235::blockSD::535::Storage.Misc.excCmd::(create) FAILED: <err> = "/usr/bin/dd: failed to open '/dev/d2198dd8-ab81-4c3f-b2a7-21dc6b2e0a10/metadata': Permission denied\n"; <rc> = 1

Comment 11 Ala Hino 2015-10-29 08:15:14 UTC
Hi Devin,

Could you please run following commands:

$ cat /etc/passwd | grep vdsm

$ cat /etc/group | grep vdsm

Comment 12 Yaniv Bronhaim 2015-10-29 09:33:38 UTC
Vdsm installs the vdsm-lvm.rules file (it is listed in the spec); you should figure out how it went missing in this environment ... the last commit regarding this area is http://gerrit.ovirt.org/30869 - check if something changed there.
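
As a quick sanity check - a sketch, the exact rules path may differ between versions - you can verify which package owns the rules file and whether it was modified on disk:

   $ rpm -qf /usr/lib/udev/rules.d/12-vdsm-lvm.rules   # shows which package installed the file
   $ rpm -V vdsm                                       # lists any files that differ from the packaged version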

Comment 13 Oved Ourfali 2015-10-29 13:38:26 UTC
Restoring needinfo on Devin.

Comment 14 Devin Bougie 2015-10-29 14:11:46 UTC
Hi Ala,

The vdsm user is in our LDAP domain, so it's not in the local /etc/passwd.  

[root@lnx84 ~]# id vdsm
uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),106(sanlock)

[root@lnx84 ~]# cat /etc/passwd | grep vdsm

[root@lnx84 ~]# cat /etc/group | grep vdsm
kvm:x:36:qemu,sanlock,vdsm
qemu:x:107:vdsm,sanlock
sanlock:x:106:sanlock,vdsm

As for vdsm-lvm.rules, I don't see any packages that provide this.  Should I try creating one manually?

[root@lnx84 ~]# yum provides */vdsm-lvm.rules
Loaded plugins: langpacks, versionlock
No matches found

Thanks,
Devin

Comment 15 Devin Bougie 2015-10-29 14:48:45 UTC
Sorry, to be clear /usr/lib/udev/rules.d/12-vdsm-lvm.rules exists, but there is no /etc/udev/rules.d/12-vdsm-lvm.rules.  A `yum provides */*vdsm-lvm.rules` doesn't show anything that provides an /etc/udev/rules.d/*vdsm-lvm.rules

Thanks,
Devin

Comment 16 Ala Hino 2015-11-02 15:51:01 UTC
Hi Devin,

Could you please do the following for me?

1) We want to see the versions of all installed packages (not only the vdsm-related ones). Please run the following command and send the output:
   $ rpm -qa 

2) We want to see udev logs when creating the storage domain:
a. Enable udev debug log by running: 
   $ udevadm control --log-priority=debug

b. Write a marker to the log before creating the storage domain. Please run:
   $ logger "BEFORE CREATING SCSI STORAGE DOMAIN"

c. Create the storage domain

d. Write a marker to the log to indicate the end of creating the storage domain:
   $ logger "AFTER CREATING SCSI STORAGE DOMAIN"

Please send us /var/log/messages
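
Optionally, if /var/log/messages is large, the following should trim it to just the window between the two markers (a sketch assuming the marker strings above are used verbatim; sd-create.log is just an example file name):

   $ sed -n '/BEFORE CREATING SCSI STORAGE DOMAIN/,/AFTER CREATING SCSI STORAGE DOMAIN/p' /var/log/messages > sd-create.log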

Thank you,
Ala

Comment 17 Devin Bougie 2015-11-02 17:16:48 UTC
Created attachment 1088660 [details]
installed packages

Here's the output of rpm -qa.

Thanks!
Devin

Comment 18 Devin Bougie 2015-11-02 17:17:33 UTC
Created attachment 1088661 [details]
udev logs

And, here are the udev logs.

Thanks again,
Devin

Comment 19 Ala Hino 2015-11-03 15:57:51 UTC
Somehow I don't see the udev logs in that attachment.
Could you please do the following?
I apologize for asking for so much.

1) Enable udev debug logs:
   $ udevadm control --log-priority=debug

2) Redirect journalctl to a temp file and send the file:
   $ journalctl -f > temp.log

If possible, start step #2 just before creating the domain and stop it immediately after the failure; this way I can focus on the logs related to creating the domain.

Thank you,
Ala

Comment 20 Devin Bougie 2015-11-03 16:41:00 UTC
Created attachment 1089132 [details]
udev logs

Comment 21 Devin Bougie 2015-11-03 16:43:34 UTC
Hi Ala,

I've updated the "udev logs" attachment, this time with the actual logs.  

Thanks for taking a look!
Devin

Comment 22 Ala Hino 2015-11-04 10:01:59 UTC
Thanks Devin.

It seems that the owner of a few of the devices is set to root rather than to vdsm.

For the sake of the test, could you please add a local vdsm user rather than using the one in LDAP? It should have the following info:

uid=36(vdsm) gid=36(kvm) groups=36(kvm),179(sanlock),107(qemu)
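
For example, a minimal sketch of creating such a user (assuming the kvm, qemu and sanlock groups already exist, as in your output above; the home directory and shell here are only examples):

   $ useradd -u 36 -g kvm -G qemu,sanlock -d /var/lib/vdsm -s /sbin/nologin vdsm
   $ id vdsm   # should report uid=36(vdsm) gid=36(kvm) plus the qemu and sanlock groups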

After adding the user, please try to create the storage domain again.

Comment 23 Devin Bougie 2015-11-04 14:48:29 UTC
Thanks, Ala.  Unfortunately nothing changed after creating a local vdsm user:

------
[root@lnx84 ~]# grep vdsm /etc/passwd
vdsm:x:36:36::/home/vdsm:/bin/bash

[root@lnx84 ~]# grep vdsm /etc/group
kvm:x:36:qemu,sanlock,vdsm
qemu:x:107:vdsm,sanlock
sanlock:x:106:sanlock,vdsm,qemu,vdsm

[root@lnx84 ~]# id vdsm
uid=36(vdsm) gid=36(kvm) groups=36(kvm),107(qemu),106(sanlock)
------

Any more suggestions would be greatly appreciated.

Thanks,
Devin

Comment 24 Ala Hino 2015-11-04 15:54:42 UTC
One thing is obvious: something is wrong with how udev works with the vdsm user.

Could you please perform the following:

1) Reload udev rules. Please run:

   $ udevadm control --reload

   Try to create the domain and see if things are any better. If not ...

2) Restart udev service:

   $ systemctl restart systemd-udevd.service

   Try now to create the domain.
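
To verify that the rules actually took effect without recreating the domain, you can re-trigger change events on the dm devices and re-check the ownership (a sketch; the device names are just examples from your earlier listing):

   $ udevadm trigger --action=change --sysname-match='dm-*'
   $ ls -lZ /dev/dm-20 /dev/dm-21 /dev/dm-22 /dev/dm-23 /dev/dm-24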

Comment 25 Devin Bougie 2015-11-04 17:23:31 UTC
Hi Ala,

After "udevadm control --reload", the error changed to "Error while executing action Attach Storage Domain: Internal block device read failure".  However, the domain remained in the gui with a status of "Unattached," and I was eventually able to attach and activate it.  Thank you!

Unless you have any other thoughts on how to avoid this in the future, I will remember to try "udevadm control --reload" the next time this happens.

Thanks again,
Devin

Comment 26 Ala Hino 2015-11-04 20:32:33 UTC
Good news!

To help us better understand the issue, could you please elaborate more on your deployment - mainly, explain how you installed oVirt, how you set up your environment to work with LDAP, when the vdsm user was created in LDAP, etc.

In addition, could you please remove the local vdsm user, reload the udev rules, and see whether you are still able to create domains or whether you get the original error again?

Thank you,
Ala

Comment 27 Devin Bougie 2015-11-06 19:35:08 UTC
Hi Ala,

I installed ovirt following the quick start guide (install ovirt-release, install ovirt-engine, run engine-setup).

This was on top of an EL7.1 host already bound to our AD domain and configured to use sssd for nss, pam, and autofs, and our nsswitch.conf has "files sss" for passwd, shadow, group, initgroups, and automount.  The vdsm user had been created in AD some time ago - before the OS was installed on any of our ovirt hosts.

It does look like vdsmd requires a local vdsm user at some point in the process, but it's not really consistent.  
- I set up a clean host without vdsm in /etc/passwd, and all of the /dev/dm-* devices were owned by root.  That host could not join the iSCSI storage domain.
- I manually changed the ownership of the appropriate devices to vdsm, and everything started working.  However, after a "systemctl restart vdsmd", all of the dm- devices would change back to root.
- I created a local vdsm user, restarted vdsmd, and the device ownership changed to vdsm.
- I deleted the local vdsm user, restarted vdsmd, ran "udevadm control --reload", and rebooted.  All of the devices continued to be owned by vdsm.

I'm sorry that's probably not very helpful.  Please let me know if you have any more questions or requests.

Thanks,
Devin

Comment 28 Ala Hino 2015-11-10 06:36:46 UTC
Indeed, vdsm looks for an existing vdsm user and creates a local one only if none is found; if the user already exists (for example in LDAP), a local user is not created.

I will close this one as I don't think we have an issue here. I will also try to investigate further on an environment where the vdsm user is defined in LDAP rather than created locally by vdsm, and see whether this is supported.
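
For anyone hitting this, a quick way to check where the vdsm user comes from (a sketch):

   $ getent passwd vdsm          # resolves through nsswitch, so LDAP/sssd users are also found
   $ grep '^vdsm:' /etc/passwd   # matches only if the user is defined locally

If the first command returns an entry but the second does not, the user is external and vdsm will not create a local one.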

Thank you,
Ala

Comment 29 Allon Mureinik 2015-11-10 07:01:25 UTC
(In reply to Ala Hino from comment #28)
> Indeed, vdsm looks for an existing vdsm user and creates a local one only
> if none is found; if the user already exists (for example in LDAP), a local
> user is not created.
> 
> I will close this one as I don't think we have an issue here. I will also
> try to investigate further on an environment where the vdsm user is defined
> in LDAP rather than created locally by vdsm, and see whether this is supported.

Ala, let's document this behavior please.

Comment 30 Ala Hino 2015-11-10 07:14:01 UTC
Done

