Bug 856240 - after removing a domain, some LUNs appear as used because of device-mapper leftovers
Status: CLOSED DUPLICATE of bug 1059757
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: unspecified
Hardware: x86_64 Linux
Priority: medium, Severity: medium
Target Milestone: ---
Target Release: 3.4.0
Assigned To: Sergey Gotliv
QA Contact: Aharon Canan
Whiteboard: storage
Keywords: Reopened, Triaged
Depends On:
Blocks:
Reported: 2012-09-11 10:37 EDT by Dafna Ron
Modified: 2016-02-10 12:48 EST
CC: 10 users

Doc Type: Bug Fix
Last Closed: 2014-02-10 04:43:12 EST
Type: Bug
oVirt Team: Storage


Attachments
log (2.32 MB, application/x-xz), 2012-09-11 10:37 EDT, Dafna Ron
vgremove (771.48 KB, application/x-xz), 2012-09-16 03:27 EDT, Dafna Ron
logs-2.2.14 (2.15 MB, application/x-gzip), 2014-02-02 08:43 EST, Elad
logs-9.2.14 (682.09 KB, application/x-gzip), 2014-02-09 08:01 EST, Elad
Description Dafna Ron 2012-09-11 10:37:27 EDT
Created attachment 611796 [details]
log

Description of problem:

After removing several domains, I tried to create new domains from the same LUNs. Some of the LUNs appear as used during discovery, even though the domains were removed properly.
getDeviceList reports the device as used because vdsm runs pvcreate to check whether the device is busy, and pvcreate fails:

[root@gold-vdsd tmp]# pvcreate /dev/mapper/3514f0c5695800458
Can't open /dev/mapper/3514f0c5695800458 exclusively. Mounted filesystem?

Running lsblk shows that there are device-mapper leftovers:

[root@gold-vdsd tmp]# lsblk /dev/mapper/3514f0c5695800458
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
3514f0c5695800458 (dm-59) 253:59 0 50G 0 mpath
├─5c397bae--486c--4b16--b263--eba3dcc7f19f-metadata (dm-65) 253:65 0 512M 0 lvm
├─5c397bae--486c--4b16--b263--eba3dcc7f19f-ids (dm-66) 253:66 0 128M 0 lvm
├─5c397bae--486c--4b16--b263--eba3dcc7f19f-leases (dm-67) 253:67 0 2G 0 lvm
├─5c397bae--486c--4b16--b263--eba3dcc7f19f-inbox (dm-86) 253:86 0 128M 0 lvm
├─5c397bae--486c--4b16--b263--eba3dcc7f19f-outbox (dm-87) 253:87 0 128M 0 lvm
└─5c397bae--486c--4b16--b263--eba3dcc7f19f-master (dm-88) 253:88 0 1G 0 lvm 

After cleaning up the leftovers, pvcreate succeeds, which means the LUN will no longer appear as used:

[root@gold-vdsd tmp]# dmsetup remove 5c397bae--486c--4b16--b263--eba3dcc7f19f-metadata 5c397bae--486c--4b16--b263--eba3dcc7f19f-ids 5c397bae--486c--4b16--b263--eba3dcc7f19f-leases 5c397bae--486c--4b16--b263--eba3dcc7f19f-inbox 5c397bae--486c--4b16--b263--eba3dcc7f19f-outbox 5c397bae--486c--4b16--b263--eba3dcc7f19f-master
[root@gold-vdsd tmp]# pvcreate /dev/mapper/3514f0c5695800458
Writing physical volume data to disk "/dev/mapper/3514f0c5695800458"
Physical volume "/dev/mapper/3514f0c5695800458" successfully created 
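The manual cleanup above can be sketched as a small helper that extracts the leftover LVM device-mapper names from the lsblk output before handing them to `dmsetup remove`. This is a hypothetical helper for illustration, not vdsm's actual cleanup code, and it assumes the lsblk output format shown above:

```shell
# Extract the names of leftover LVM device-mapper entries from `lsblk <dev>`
# output in the format shown above. Hypothetical helper, not part of vdsm.
leftover_lvm_names() {
    awk '$NF == "lvm" {
        name = $1
        sub(/^[^0-9a-zA-Z]+/, "", name)  # strip the tree-drawing prefix
        print name
    }'
}
```

With such a helper, something like `lsblk /dev/mapper/3514f0c5695800458 | leftover_lvm_names | xargs dmsetup remove` would perform the cleanup shown above, but only run it after verifying the domain really was removed.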

Version-Release number of selected component (if applicable):

vdsm-4.9.6-32.0.el6_3.x86_64
si17

How reproducible:

Unknown; this issue appears randomly.

Steps to Reproduce:
1. Remove several domains one after the other.
2. Create a new domain and discover LUNs.
Actual results:

Some LUNs appear as used because their device-mapper entries were not cleaned up when the domains were removed.

Expected results:

Device-mapper entries should be cleaned up for all LUNs when a domain is removed.

Additional info:

Attaching vdsm logs which show that all domains were removed.

[root@gold-vdsd tmp]# pvcreate /dev/mapper/3514f0c5695800443
Writing physical volume data to disk "/dev/mapper/3514f0c5695800443"
Physical volume "/dev/mapper/3514f0c5695800443" successfully created
[root@gold-vdsd tmp]# vdsClient -s 0 getDeviceList
[{'GUID': '3514f0c5695800444',
'capacity': '107374182400',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '3',
'physdev': 'sde',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': '224UaD-2iIb-Qyus-WZth-bvLa-5TFs-qB5dD0',
'serial': 'SXtremIO_XtremApp_50151775650e4da5a22ba5845261ca4a',
'status': 'free',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800446',
'capacity': '107374182400',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '5',
'physdev': 'sdg',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'v0jLv2-aAMO-VBjt-9ROR-qCtw-CLVo-lVQEge',
'serial': 'SXtremIO_XtremApp_6cbb6b5746de4c2893a82598dc62e535',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800442',
'capacity': '107374182400',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '1',
'physdev': 'sdc',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'xqde4m-sMme-Vadm-Fw1R-YAW0-REBy-Yp8CUT',
'serial': 'SXtremIO_XtremApp_939f98850330423c9ed1039ee39fee22',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800447',
'capacity': '107374182400',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '6',
'physdev': 'sdh',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'tGOZQD-jne2-zI5G-3Rhl-ddeq-sNqn-aVnw90',
'serial': 'SXtremIO_XtremApp_acd0a95230b04328adaff8c1bcb30adb',
'status': 'free',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c569580044a',
'capacity': '107374182400',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '9',
'physdev': 'sdk',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'Q9RXF7-Hz3o-nuM3-sPTL-Mn0B-bgsO-zJl68G',
'serial': 'SXtremIO_XtremApp_89676d9bf4664a8684f9257e9c29868b',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800449',
'capacity': '107374182400',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '8',
'physdev': 'sdj',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'znam4R-GT8a-GQ2O-f6MP-cqUE-xPtZ-2AX0AW',
'serial': 'SXtremIO_XtremApp_30ef4fe650d4414aa8d20046e680d42e',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800448',
'capacity': '107374182400',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '7',
'physdev': 'sdi',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': '9tE4XW-qbxE-EFQ1-Tvkz-RQeN-L1PZ-P11Ozw',
'serial': 'SXtremIO_XtremApp_2ae77d0984664996a6d5c750b6cf849f',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800451',
'capacity': '53687091200',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '10',
'physdev': 'sdl',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': '',
'serial': 'SXtremIO_XtremApp_5503e7550af34ddf8fc0eb68b051a667',
'status': 'free',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800452',
'capacity': '53687091200',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '11',
'physdev': 'sdm',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': '',
'serial': 'SXtremIO_XtremApp_2b93da968eb94d45881006c53d1a66ed',
'status': 'free',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800453',
'capacity': '53687091200',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '12',
'physdev': 'sdn',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': '',
'serial': 'SXtremIO_XtremApp_482c131a987b44c18a2138a320021660',
'status': 'free',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800455',
'capacity': '53687091200',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '14',
'physdev': 'sdp',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': '6Ts1of-xkql-nG3J-aJti-P3UD-xc9u-gHs8Rd',
'serial': 'SXtremIO_XtremApp_4aeac07b3d3c4360867e5aa3b354fe3a',
'status': 'free',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800454',
'capacity': '53687091200',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '13',
'physdev': 'sdo',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': '',
'serial': 'SXtremIO_XtremApp_0f77c2a3327040a18214548c07e36bf7',
'status': 'free',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800456',
'capacity': '53687091200',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '15',
'physdev': 'sdq',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': '',
'serial': 'SXtremIO_XtremApp_2a28037fc020482dafdea342cac3e683',
'status': 'free',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800457',
'capacity': '53687091200',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '16',
'physdev': 'sdr',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'lu5pc5-MdLq-CVbU-pX6V-oKUn-AwxC-07S56M',
'serial': 'SXtremIO_XtremApp_5d401602d4e1418aa5e62d551f87d623',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800459',
'capacity': '53687091200',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '18',
'physdev': 'sdt',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'r64A2H-XiB0-SR2C-slpT-EMyi-I1Z2-Xmq1V3',
'serial': 'SXtremIO_XtremApp_8d9613fb9cd543a2802ab3ab0e854a5f',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800458',
'capacity': '53687091200',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '17',
'physdev': 'sds',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'Blz2eM-cC9h-ign4-49Nr-JVAA-ypgW-djrxKb',
'serial': 'SXtremIO_XtremApp_e73f5c8f19dc488bad82f5046e63cc19',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c569580045a',
'capacity': '53687091200',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '19',
'physdev': 'sdu',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'xGF0aH-ehpG-oJnv-GBn3-lW1y-31JT-xygFn5',
'serial': 'SXtremIO_XtremApp_01e006857da74e6b8511e071202c7249',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800445',
'capacity': '107374182400',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '4',
'physdev': 'sdf',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'QfVUhZ-9iMH-ff7r-Vco4-cycP-B8gI-HKd0bF',
'serial': 'SXtremIO_XtremApp_9c5eb719d1fb4e0a9744826bd22af164',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800441',
'capacity': '107374182400',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '0',
'physdev': 'sdb',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'QgEsru-W35V-dyDb-K3ya-Tuuz-3273-3XKSve',
'serial': 'SXtremIO_XtremApp_d2af1696371640aeae07299abfe27521',
'status': 'used',
'vendorID': 'XtremIO',
'vgUUID': ''},
{'GUID': '3514f0c5695800443',
'capacity': '107374182400',
'devtype': 'iSCSI',
'fwrev': '1.0',
'logicalblocksize': '512',
'pathlist': [{'connection': '10.35.160.7',
'initiatorname': 'default',
'iqn': 'iqn.2008-05.com.xtremio:001b21b545c0',
'port': '3260',
'portal': '1'}],
'pathstatus': [{'lun': '2',
'physdev': 'sdd',
'state': 'active',
'type': 'iSCSI'}],
'physicalblocksize': '512',
'productID': 'XtremApp',
'pvUUID': 'b25fRn-GHke-dLUs-FtwB-6uZD-vlNn-u9WXb7',
'serial': 'SXtremIO_XtremApp_5c967d64ade8474f87ca3d35209ccb71',
'status': 'free',
'vendorID': 'XtremIO',
'vgUUID': ''}]

[root@gold-vdsd tmp]# vdsClient -s 0 getDeviceList q^C
[root@gold-vdsd tmp]# pvcreate /dev/mapper/3514f0c5695800458
Can't open /dev/mapper/3514f0c5695800458 exclusively. Mounted filesystem?
[root@gold-vdsd tmp]# lsblk /dev/mapper/3514f0c5695800458
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
3514f0c5695800458 (dm-59) 253:59 0 50G 0 mpath
├─5c397bae--486c--4b16--b263--eba3dcc7f19f-metadata (dm-65) 253:65 0 512M 0 lvm
├─5c397bae--486c--4b16--b263--eba3dcc7f19f-ids (dm-66) 253:66 0 128M 0 lvm
├─5c397bae--486c--4b16--b263--eba3dcc7f19f-leases (dm-67) 253:67 0 2G 0 lvm
├─5c397bae--486c--4b16--b263--eba3dcc7f19f-inbox (dm-86) 253:86 0 128M 0 lvm
├─5c397bae--486c--4b16--b263--eba3dcc7f19f-outbox (dm-87) 253:87 0 128M 0 lvm
└─5c397bae--486c--4b16--b263--eba3dcc7f19f-master (dm-88) 253:88 0 1G 0 lvm
[root@gold-vdsd tmp]# dmsetup remove 5c397bae--486c--4b16--b263--eba3dcc7f19f-metadata 5c397bae--486c--4b16--b263--eba3dcc7f19f-ids 5c397bae--486c--4b16--b263--eba3dcc7f19f-leases 5c397bae--486c--4b16--b263--eba3dcc7f19f-inbox 5c397bae--486c--4b16--b263--eba3dcc7f19f-outbox 5c397bae--486c--4b16--b263--eba3dcc7f19f-master
[root@gold-vdsd tmp]# pvcreate /dev/mapper/3514f0c5695800458
Writing physical volume data to disk "/dev/mapper/3514f0c5695800458"
Physical volume "/dev/mapper/3514f0c5695800458" successfully created
Comment 1 Ayal Baron 2012-09-16 02:46:01 EDT
The log doesn't contain any vgremove operations, so we cannot determine why the LV mappings were not removed.
However, this is exactly why you can select 'used' devices and override their data. Why did you clean things up manually? Did the override option in the GUI fail?
Comment 2 Dafna Ron 2012-09-16 03:26:38 EDT
I did not clean anything manually; all domains were removed using the UI.
The log must have rotated, but I did find the vgremove logs for device 3514f0c5695800443 on my host and will attach them.
Comment 3 Dafna Ron 2012-09-16 03:27:14 EDT
Created attachment 613381 [details]
vgremove
Comment 4 RHEL Product and Program Management 2012-12-14 02:52:39 EST
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.
Comment 5 Ayal Baron 2013-07-08 15:15:20 EDT
(In reply to Dafna Ron from comment #3)
> Created attachment 613381 [details]
> vgremove

This log does not contain the relevant vgremove.
This needs to be reproduced by creating a domain and then removing it shortly afterwards.
I cannot even determine whether the vgremove was done on the same machine or on another host.

In addition, does using force (from the GUI) to create the domain work?
Comment 7 Elad 2014-02-02 08:43:47 EST
Created attachment 858243 [details]
logs-2.2.14

Re-opening: we managed to reproduce the issue and have exact steps to reach it.

It seems that after following the scenario steps of bug https://bugzilla.redhat.com/show_bug.cgi?id=1059757 in vdsm, when the user tries to create a new SD from a non-SPM host using the LUN(s) that were part of the old SD, we get the mentioned error: "Can't open /dev/mapper/3514f0c5695800458 exclusively. Mounted filesystem?"

It happens because the device-mapper state on the non-SPM host is not updated when the PV(s) are removed.

Uploading the relevant logs (logs-2.2.14).
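A quick way to see why pvcreate still fails on such a host is to check the kernel's holders list for the multipath device: the stale LV mappings show up as holders. Below is a small sketch assuming the standard /sys/block/<dev>/holders sysfs layout; it is a hypothetical helper for diagnosis, not vdsm code (the second argument exists only to make it testable and defaults to the real sysfs path):

```shell
# Print the device-mapper entries that still hold a block device open,
# using the kernel's sysfs "holders" directory.
# Hypothetical helper, not part of vdsm.
holders_of() {
    dev=$1                  # kernel name, e.g. dm-59
    root=${2:-/sys/block}   # overridable sysfs root, for testing
    for h in "$root/$dev/holders"/*; do
        [ -e "$h" ] || continue           # glob did not match: no holders
        # device-mapper holders expose their dm name in dm/name
        if [ -r "$h/dm/name" ]; then
            cat "$h/dm/name"
        else
            basename "$h"
        fi
    done
}
```

On the host in the description, `holders_of dm-59` would list the six leftover LV names until `dmsetup remove` clears them, after which pvcreate can open the device exclusively.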
Comment 8 Sergey Gotliv 2014-02-09 02:52:00 EST
Elad,

Please review comment #1 and comment #5. The new logs don't contain vgremove either, and you didn't attach the relevant Engine log to help understand the flow.
Can you find the relevant Engine log?
Comment 9 Elad 2014-02-09 08:01:25 EST
Created attachment 861019 [details]
logs-9.2.14

Thread-211::DEBUG::2014-02-09 14:51:19,923::lvm::295::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm vgremove --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ \'a|/dev/mapper/3514f0c59af400001|/dev/mapper/3514f0c59af400002|/dev/mapper/3514f0c59af400003|/dev/mapper/3514f0c59af400004|/dev/mapper/3514f0c59af400005|/dev/mapper/360060160f4a030008e918470a37be311|/dev/mapper/360060160f4a0300090918470a37be311|/dev/mapper/360060160f4a0300092918470a37be311|/dev/mapper/360060160f4a0300094918470a37be311|/dev/mapper/360060160f4a03000949bb85f567ce311|/dev/mapper/360060160f4a0300096918470a37be311|/dev/mapper/360060160f4a030009e73bb88a37be311|/dev/mapper/360060160f4a03000a073bb88a37be311|/dev/mapper/360060160f4a03000a273bb88a37be311|/dev/mapper/360060160f4a03000a473bb88a37be311|/dev/mapper/360060160f4a03000a673bb88a37be311|/dev/mapper/SServeRA_Disk1_7C65EA2F|\', \'r|.*|\' ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " -f b92b6975-1e45-4652-bb12-dd33a0da4b3f' (cwd None)
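For reference, the filter clause in that vgremove command restricts LVM to the listed multipath devices and rejects everything else. Composing such a clause can be sketched as follows; this is a hypothetical helper in the style seen in the log above, not vdsm's actual code:

```shell
# Build an LVM filter clause that accepts only the given multipath GUIDs
# and rejects every other device, in the style seen in the vdsm log above.
# Hypothetical helper for illustration, not part of vdsm.
lvm_filter() {
    accept=""
    for guid in "$@"; do
        # each accepted device is appended as /dev/mapper/<guid>|
        accept="${accept}/dev/mapper/${guid}|"
    done
    # the trailing 'r|.*|' rejects everything not explicitly accepted
    printf "filter = [ 'a|%s', 'r|.*|' ]" "$accept"
}
```

For example, `lvm_filter 3514f0c59af400001 3514f0c59af400002` yields `filter = [ 'a|/dev/mapper/3514f0c59af400001|/dev/mapper/3514f0c59af400002|', 'r|.*|' ]`, which would then be embedded in the `--config` string passed to the lvm command.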
Comment 10 Sergey Gotliv 2014-02-10 04:43:12 EST

*** This bug has been marked as a duplicate of bug 1059757 ***
