Bug 1726330 - [Cinderlib] - Start VM with 3PAR-ISCSI managed storage domain fails with the error: "Managed Volume is already attached"
Summary: [Cinderlib] - Start VM with 3PAR-ISCSI managed storage domain fails with the error: "Managed Volume is already attached"
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.3.5.1
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.4.1
Target Release: ---
Assignee: Benny Zlotnik
QA Contact: Shir Fishbain
URL:
Whiteboard:
Depends On: 1844911
Blocks:
 
Reported: 2019-07-02 15:43 UTC by Shir Fishbain
Modified: 2020-07-08 08:24 UTC
CC: 6 users

Fixed In Version: rhv-4.4.0-28
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-08 08:24:47 UTC
oVirt Team: Storage
Embargoed:
pm-rhel: ovirt-4.4+


Attachments (Terms of Use)
Logs (2.61 MB, application/zip)
2019-07-03 10:23 UTC, Shir Fishbain


Links
oVirt gerrit 101552 (MERGED): core: remove @Singleton annotation from ManagedBlockStorageCommandUtil (last updated 2020-07-05 11:46:17 UTC)

Description Shir Fishbain 2019-07-02 15:43:46 UTC
Description of problem:
Starting a VM with a disk on a 3PAR-ISCSI managed block storage domain fails with the error: "Managed Volume is already attached"

From vdsm log (3PAR - ISCSI):
2019-07-02 18:30:29,238+0300 ERROR (jsonrpc/5) [api] FINISH attach_volume error=Managed Volume is already attached.: u"vol_id=5821ba47-bc64-4e62-97f7-8e9e3fe0a2fb path=/dev/mapper/360002ac000000000000006c000021f6b attachment={u'path': u'/dev/dm-35', u'scsi_wwn': u'360002ac000000000000006c000021f6b', u'type': u'block', u'multipath_id': u'360002ac000000000000006c000021f6b'}" (api:131)

From engine log:
2019-07-02 18:28:43,629+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-240037) [72719f4] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host_mixed_1 command AttachManagedBlockStorageVolumeVDS failed: Managed Volume is already attached.: u"vol_id=5821ba47-bc64-4e62-97f7-8e9e3fe0a2fb path=/dev/mapper/360002ac000000000000006c000021f6b attachment={u'path': u'/dev/dm-35', u'scsi_wwn': u'360002ac000000000000006c000021f6b', u'type': u'block', u'multipath_id': u'360002ac000000000000006c000021f6b'}"
2019-07-02 18:28:43,630+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-240037) [72719f4] Command 'AttachManagedBlockStorageVolumeVDSCommand(HostName = host_mixed_1, AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='b170c58a-23e1-4f09-b2ea-055284741f84', vds='Host[host_mixed_1,b170c58a-23e1-4f09-b2ea-055284741f84]'})' execution failed: VDSGenericException: VDSErrorException: Failed to AttachManagedBlockStorageVolumeVDS, error = Managed Volume is already attached.: u"vol_id=5821ba47-bc64-4e62-97f7-8e9e3fe0a2fb path=/dev/mapper/360002ac000000000000006c000021f6b attachment={u'path': u'/dev/dm-35', u'scsi_wwn': u'360002ac000000000000006c000021f6b', u'type': u'block', u'multipath_id': u'360002ac000000000000006c000021f6b'}", code = 927
2019-07-02 18:28:43,645+03 ERROR [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-240037) [72719f4] Command 'org.ovirt.engine.core.bll.RunVmCommand' failed: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to AttachManagedBlockStorageVolumeVDS, error = Managed Volume is already attached.: u"vol_id=5821ba47-bc64-4e62-97f7-8e9e3fe0a2fb path=/dev/mapper/360002ac000000000000006c000021f6b attachment={u'path': u'/dev/dm-35', u'scsi_wwn': u'360002ac000000000000006c000021f6b', u'type': u'block', u'multipath_id': u'360002ac000000000000006c000021f6b'}", code = 927 (Failed with error unexpected and code 16)
2019-07-02 18:28:43,645+03 ERROR [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-240037) [72719f4] Exception: javax.ejb.EJBException: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to AttachManagedBlockStorageVolumeVDS, error = Managed Volume is already attached.: u"vol_id=5821ba47-bc64-4e62-97f7-8e9e3fe0a2fb path=/dev/mapper/360002ac000000000000006c000021f6b attachment={u'path': u'/dev/dm-35', u'scsi_wwn': u'360002ac000000000000006c000021f6b', u'type': u'block', u'multipath_id': u'360002ac000000000000006c000021f6b'}", code = 927 (Failed with error unexpected and code 16)
2019-07-02 18:30:20,814+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-240083) [5b9e52fe-fd93-4351-9b58-f7c119ae39a2] EVENT_ID: CONNECTOR_INFO_MISSING_ON_VDS(10,772), Cannot run VM 3par with Managed Block Storage disks on Host host_mixed_2. Connector information is missing. Check if os-brick package is available on the host.
2019-07-02 18:30:29,249+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-240083) [21aed6de] Failed in 'AttachManagedBlockStorageVolumeVDS' method
2019-07-02 18:30:29,260+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-240083) [21aed6de] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM host_mixed_1 command AttachManagedBlockStorageVolumeVDS failed: Managed Volume is already attached.: u"vol_id=5821ba47-bc64-4e62-97f7-8e9e3fe0a2fb path=/dev/mapper/360002ac000000000000006c000021f6b attachment={u'path': u'/dev/dm-35', u'scsi_wwn': u'360002ac000000000000006c000021f6b', u'type': u'block', u'multipath_id': u'360002ac000000000000006c000021f6b'}"
2019-07-02 18:30:29,261+03 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.AttachManagedBlockStorageVolumeVDSCommand] (EE-ManagedThreadFactory-engine-Thread-240083) [21aed6de] Command 'AttachManagedBlockStorageVolumeVDSCommand(HostName = host_mixed_1, AttachManagedBlockStorageVolumeVDSCommandParameters:{hostId='b170c58a-23e1-4f09-b2ea-055284741f84', vds='Host[host_mixed_1,b170c58a-23e1-4f09-b2ea-055284741f84]'})' execution failed: VDSGenericException: VDSErrorException: Failed to AttachManagedBlockStorageVolumeVDS, error = Managed Volume is already attached.: u"vol_id=5821ba47-bc64-4e62-97f7-8e9e3fe0a2fb path=/dev/mapper/360002ac000000000000006c000021f6b attachment={u'path': u'/dev/dm-35', u'scsi_wwn': u'360002ac000000000000006c000021f6b', u'type': u'block', u'multipath_id': u'360002ac000000000000006c000021f6b'}", code = 927
2019-07-02 18:30:29,281+03 ERROR [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-240083) [21aed6de] Command 'org.ovirt.engine.core.bll.RunVmCommand' failed: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to AttachManagedBlockStorageVolumeVDS, error = Managed Volume is already attached.: u"vol_id=5821ba47-bc64-4e62-97f7-8e9e3fe0a2fb path=/dev/mapper/360002ac000000000000006c000021f6b attachment={u'path': u'/dev/dm-35', u'scsi_wwn': u'360002ac000000000000006c000021f6b', u'type': u'block', u'multipath_id': u'360002ac000000000000006c000021f6b'}", code = 927 (Failed with error unexpected and code 16)
2019-07-02 18:30:29,281+03 ERROR [org.ovirt.engine.core.bll.RunVmCommand] (EE-ManagedThreadFactory-engine-Thread-240083) [21aed6de] Exception: javax.ejb.EJBException: org.ovirt.engine.core.common.errors.EngineException: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to AttachManagedBlockStorageVolumeVDS, error = Managed Volume is already attached.: u"vol_id=5821ba47-bc64-4e62-97f7-8e9e3fe0a2fb path=/dev/mapper/360002ac000000000000006c000021f6b attachment={u'path': u'/dev/dm-35', u'scsi_wwn': u'360002ac000000000000006c000021f6b', u'type': u'block', u'multipath_id': u'360002ac000000000000006c000021f6b'}", code = 927 (Failed with error unexpected and code 16)


Version-Release number of selected component (if applicable):
Cinderlib Version: 0.9.0
ovirt-engine-4.3.5.1-0.1.el7.noarch
vdsm-4.30.20-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a managed block storage domain using the 3PAR (iSCSI/FC) or Ceph driver
2. Create a VM
3. Create a disk on the storage domain created in step 1
4. Attach the disk to the VM
5. Start the VM

Actual results:
The VM fails to start with the error "Managed Volume is already attached"

Expected results:
The VM should start successfully with a disk from the 3PAR-ISCSI managed block storage domain

Additional info:

Comment 1 Shir Fishbain 2019-07-02 15:46:40 UTC
Steps to Reproduce:
1. Create a managed block storage domain from 3PAR-ISCSI:
<storage_domain>
   <name>cinder-hp3par2</name>
   <type>managed_block_storage</type>
   <storage>
      <type>managed_block_storage</type>
      <driver_options>
      	<property>
            <name>hpe3par_api_url</name>
            <value>https://3par-cli.mgmt.lab3.tlv.redhat.com:8080/api/v1</value>
         </property>
      	<property>
            <name>san_ip</name>
            <value>10.35.84.14</value>
         </property>
      	<property>
            <name>san_login</name>
            <value>root</value>
         </property>
         <property>
            <name>san_password</name>
            <value>Qum!0net</value>
         </property>
         <property>
            <name>hpe3par_username</name>
            <value>3paredit</value>
         </property>
         <property>
            <name>hpe3par_password</name>
            <value>123456</value>
         </property>
         <property>
            <name>hpe3par_cpg</name>
            <value>SSD_r1</value>
         </property>
         <property>
            <name>volume_driver</name>
            <value>cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver</value>
         </property>
         <property>
            <name>hpe3par_debug</name>
            <value>False</value>
         </property>
         <property>
            <name>hpe3par_iscsi_chap_enabled</name>
            <value>False</value>
         </property>
         <property>
            <name>max_over_subscription_ratio</name>
            <value>20.0</value>
         </property>
         <property>
            <name>reserved_percentage</name>
            <value>15</value>
         </property>
         <property>
            <name>hpe3par_iscsi_ips</name>
            <value>10.35.146.1,10.35.146.2,10.35.146.3,10.35.146.4</value>
         </property>
      </driver_options>
   </storage>
   <host>
      <name>host_mixed_1</name>
   </host>
</storage_domain>

3. Create a disk on the storage domain created in step 1
4. Attach the disk to the VM
5. Start the VM
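For reference, the domain creation in step 1 can be sketched as a REST request against the engine API. ENGINE_FQDN and PASSWORD are placeholders, and the payload below is abbreviated from the full XML above:

```shell
# Save the <storage_domain> payload from this comment (abbreviated here;
# use the full driver_options list above) and POST it to the engine.
cat > /tmp/cinder-hp3par2.xml <<'EOF'
<storage_domain>
  <name>cinder-hp3par2</name>
  <type>managed_block_storage</type>
  <storage>
    <type>managed_block_storage</type>
    <driver_options>
      <property>
        <name>volume_driver</name>
        <value>cinder.volume.drivers.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver</value>
      </property>
      <!-- remaining driver_options properties as listed above -->
    </driver_options>
  </storage>
  <host>
    <name>host_mixed_1</name>
  </host>
</storage_domain>
EOF

# Requires a live engine, so the call is shown commented out.
# -k skips TLS verification in a lab setup.
# curl -k --user 'admin@internal:PASSWORD' \
#      -H 'Content-Type: application/xml' -H 'Accept: application/xml' \
#      --data @/tmp/cinder-hp3par2.xml \
#      "https://ENGINE_FQDN/ovirt-engine/api/storagedomains"
```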

Comment 2 Shir Fishbain 2019-07-03 10:23:33 UTC
Created attachment 1587019 [details]
Logs

Comment 8 Michal Skrivanek 2020-04-17 05:42:23 UTC
Please include the cinderlib logs.

Comment 10 Avihai 2020-04-20 10:15:32 UTC
This bug cannot be verified until bug 1824967 is resolved.

Comment 11 Shir Fishbain 2020-07-05 13:33:17 UTC
Starting a VM with the 3PAR-ISCSI driver works after applying the following steps on the hosts:

1) In the rhel-8-openstack-cinderlib-rpms repository configuration, change "enabled" from 0 to 1:
[rhel-8-openstack-cinderlib-rpms]
name=rhel-8-openstack-cinderlib-rpms
baseurl=http://download-node-02.eng.bos.redhat.com/rcm-guest/puddles/OpenStack/16.1-RHEL-8/beta-1.0/compose/CinderTools/x86_64/os/
gpgcheck=0
enabled=1
2) Run dnf install -y python3-os-brick
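The two steps above can be sketched as a short shell session. The repo file path is an assumption (whichever file under /etc/yum.repos.d/ holds the stanza shown above):

```shell
# Workaround sketch; the exact repo file path is an assumption.
REPO_FILE="${REPO_FILE:-/etc/yum.repos.d/rhel-8-openstack-cinderlib-rpms.repo}"

if [ -f "$REPO_FILE" ]; then
    # 1) Flip the cinderlib repo from enabled=0 to enabled=1 in place.
    sed -i 's/^enabled=0$/enabled=1/' "$REPO_FILE"
    # 2) Install the os-brick connector library the engine checks for on the host.
    dnf install -y python3-os-brick
fi
```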

Versions: 
vdsm-4.40.22-1.el8ev.x86_64
ovirt-engine-4.4.1.7-0.3.el8ev.noarch

Comment 12 Sandro Bonazzola 2020-07-08 08:24:47 UTC
This bug is included in the oVirt 4.4.1 release, published on July 8th 2020.

Since the problem described in this bug report should be resolved in the oVirt 4.4.1 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

