Bug 852655 - [engine-core] user is able to activate disks from SD which in maintenance mode
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
3.1.0
x86_64 Linux
unspecified Severity high
: ---
: 3.1.5
Assigned To: Ayal Baron
Haim
storage
: FutureFeature
Depends On:
Blocks:
Reported: 2012-08-29 04:26 EDT by vvyazmin@redhat.com
Modified: 2016-02-10 15:24 EST (History)
10 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-03-03 16:41:16 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
## Logs vdsm, rhevm (583.07 KB, application/x-gzip)
2012-08-29 04:26 EDT, vvyazmin@redhat.com

Description vvyazmin@redhat.com 2012-08-29 04:26:25 EDT
Created attachment 607810
## Logs vdsm, rhevm

Description of problem:
Able to attach and activate disks from a storage domain (SD) which is in maintenance mode.

Version-Release number of selected component (if applicable):
Verified on RHEVM 3.1 - SI15.1

RHEVM: rhevm-3.1.0-13.el6ev.noarch
VDSM: vdsm-4.9.6-30.0.el6_3.x86_64
LIBVIRT: libvirt-0.9.10-21.el6.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.298.el6_3.x86_64
SANLOCK: sanlock-2.3-3.el6_3.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Create 2 storage domains on an iSCSI DC
2. Create a VM with 2 thin-provisioned disks on the first SD (SD-01), and 10 preallocated disks on the second SD (SD-02)
3. Install RHEL 6.3 on the VM
4. Deactivate the 10 disks (HotUnplugDisk)
5. Remove those 10 disks
6. Put the second SD (SD-02) into Maintenance
7. Attach the 10 disks to the VM
  
Actual results:
In the "Add Virtual Disk" dialog I cannot create disks on the second SD (SD-02): SDs in maintenance mode do not appear in the drop-down menu. However, via the "Attach Disk" check-box I can still see and attach disks that reside on an SD in maintenance mode.

Expected results:
The "Attach Disk" dialog should neither display nor allow attaching disks that reside on an SD in maintenance mode.

Additional info:

*** vdsm log ***
Thread-390::DEBUG::2012-08-29 09:06:57,782::task::872::TaskManager.Task::(_run) Task=`60661d62-a115-4cbb-bd06-6f05154ed36d`::Task._run: 60661d62-a115-4cbb-bd06-6f05154ed36d ('051fc268-41ec-4f0a-8f43-b97cc00ef301', '019f7b52-d4b9-43e4-968a-49b87f036351', 'b3ec311f-8ffd-459b-9cef-66d8236f51a7', '8774865d-5ddc-451d-8887-ad2535423cb0') {} failed - stopping task
Thread-390::DEBUG::2012-08-29 09:06:57,783::task::1199::TaskManager.Task::(stop) Task=`60661d62-a115-4cbb-bd06-6f05154ed36d`::stopping in state preparing (force False)
Thread-390::DEBUG::2012-08-29 09:06:57,783::task::978::TaskManager.Task::(_decref) Task=`60661d62-a115-4cbb-bd06-6f05154ed36d`::ref 1 aborting True
Thread-390::INFO::2012-08-29 09:06:57,783::task::1157::TaskManager.Task::(prepare) Task=`60661d62-a115-4cbb-bd06-6f05154ed36d`::aborting: Task is aborted: 'Image path does not exist or cannot be accessed/created' - code 254
Thread-390::DEBUG::2012-08-29 09:06:57,784::task::1162::TaskManager.Task::(prepare) Task=`60661d62-a115-4cbb-bd06-6f05154ed36d`::prepare: aborted: Image path does not exist or cannot be accessed/created
Thread-390::DEBUG::2012-08-29 09:06:57,784::task::978::TaskManager.Task::(_decref) Task=`60661d62-a115-4cbb-bd06-6f05154ed36d`::ref 0 aborting True
Thread-390::DEBUG::2012-08-29 09:06:57,784::task::913::TaskManager.Task::(_doAbort) Task=`60661d62-a115-4cbb-bd06-6f05154ed36d`::Task._doAbort: force False
Thread-390::DEBUG::2012-08-29 09:06:57,785::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-390::DEBUG::2012-08-29 09:06:57,785::task::588::TaskManager.Task::(_updateState) Task=`60661d62-a115-4cbb-bd06-6f05154ed36d`::moving from state preparing -> state aborting
Thread-390::DEBUG::2012-08-29 09:06:57,785::task::537::TaskManager.Task::(__state_aborting) Task=`60661d62-a115-4cbb-bd06-6f05154ed36d`::_aborting: recover policy none
Thread-390::DEBUG::2012-08-29 09:06:57,786::task::588::TaskManager.Task::(_updateState) Task=`60661d62-a115-4cbb-bd06-6f05154ed36d`::moving from state aborting -> state failed
Thread-390::DEBUG::2012-08-29 09:06:57,786::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {'Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301': < ResourceRef 'Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301', isValid: 'True' obj: 'None'>}
Thread-390::DEBUG::2012-08-29 09:06:57,786::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-390::DEBUG::2012-08-29 09:06:57,787::resourceManager::538::ResourceManager::(releaseResource) Trying to release resource 'Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301'
Thread-390::DEBUG::2012-08-29 09:06:57,787::resourceManager::553::ResourceManager::(releaseResource) Released resource 'Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301' (0 active users)
Thread-390::DEBUG::2012-08-29 09:06:57,787::resourceManager::558::ResourceManager::(releaseResource) Resource 'Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301' is free, finding out if anyone is waiting for it.
Thread-390::DEBUG::2012-08-29 09:06:57,788::resourceManager::565::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301', Clearing records.
Thread-390::ERROR::2012-08-29 09:06:57,788::dispatcher::66::Storage.Dispatcher.Protect::(run) {'status': {'message': "Image path does not exist or cannot be accessed/created: ('/rhev/data-center/019f7b52-d4b9-43e4-968a-49b87f036351/051fc268-41ec-4f0a-8f43-b97cc00ef301/images/b3ec311f-8ffd-459b-9cef-66d8236f51a7',)", 'code': 254}}
Thread-390::ERROR::2012-08-29 09:06:57,788::BindingXMLRPC::879::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 869, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/BindingXMLRPC.py", line 250, in vmHotplugDisk
    return vm.hotplugDisk(params)
  File "/usr/share/vdsm/API.py", line 392, in hotplugDisk
    return curVm.hotplugDisk(params)
  File "/usr/share/vdsm/libvirtvm.py", line 1481, in hotplugDisk
    diskParams['path'] = self.cif.prepareVolumePath(diskParams)
  File "/usr/share/vdsm/clientIF.py", line 183, in prepareVolumePath
    raise vm.VolumeError(drive)
VolumeError: Bad volume specification {'iface': 'virtio', 'format': 'raw', 'type': 'disk', 'volumeID': '8774865d-5ddc-451d-8887-ad2535423cb0', 'imageID': 'b3ec311f-8ffd-459b-9cef-66d8236f51a7', 'readonly': 'false', 'domainID': '051fc268-41ec-4f0a-8f43-b97cc00ef301', 'poolID': '019f7b52-d4b9-43e4-968a-49b87f036351', 'device': 'disk', 'shared': 'false', 'propagateErrors': 'off', 'optional': 'false'}
Thread-393::DEBUG::2012-08-29 09:06:57,892::BindingXMLRPC::864::vds::(wrapper) client [10.35.97.56]::call vmHotplugDisk with ({'vmId': 'ad3cf3a6-59d0-4272-ad2d-6ddec168adf0', 'drive': {'iface': 'virtio', 'format': 'raw', 'type': 'disk', 'volumeID': '1acfb3b9-de74-44c0-ac54-64fce40e16e4', 'imageID': '17a842c6-3f67-429c-a86f-35141136542c', 'readonly': 'false', 'domainID': '051fc268-41ec-4f0a-8f43-b97cc00ef301', 'poolID': '019f7b52-d4b9-43e4-968a-49b87f036351', 'device': 'disk', 'shared': 'false', 'propagateErrors': 'off', 'optional': 'false'}},) {} flowID [50a1a759]
Thread-393::DEBUG::2012-08-29 09:06:57,893::task::588::TaskManager.Task::(_updateState) Task=`4a6a8f40-7385-4c08-8533-30472e2ac062`::moving from state init -> state preparing
Thread-393::INFO::2012-08-29 09:06:57,894::logUtils::37::dispatcher::(wrapper) Run and protect: prepareImage(sdUUID='051fc268-41ec-4f0a-8f43-b97cc00ef301', spUUID='019f7b52-d4b9-43e4-968a-49b87f036351', imgUUID='17a842c6-3f67-429c-a86f-35141136542c', volUUID='1acfb3b9-de74-44c0-ac54-64fce40e16e4')
Thread-393::DEBUG::2012-08-29 09:06:57,894::resourceManager::175::ResourceManager.Request::(__init__) ResName=`Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301`ReqID=`003c1796-dcbd-4011-a170-85c31cf43d29`::Request was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at 'registerResource'
Thread-393::DEBUG::2012-08-29 09:06:57,894::resourceManager::486::ResourceManager::(registerResource) Trying to register resource 'Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301' for lock type 'shared'
Thread-393::DEBUG::2012-08-29 09:06:57,895::resourceManager::528::ResourceManager::(registerResource) Resource 'Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301' is free. Now locking as 'shared' (1 active user)
Thread-393::DEBUG::2012-08-29 09:06:57,895::resourceManager::212::ResourceManager.Request::(grant) ResName=`Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301`ReqID=`003c1796-dcbd-4011-a170-85c31cf43d29`::Granted request
Thread-393::DEBUG::2012-08-29 09:06:57,896::task::817::TaskManager.Task::(resourceAcquired) Task=`4a6a8f40-7385-4c08-8533-30472e2ac062`::_resourcesAcquired: Storage.051fc268-41ec-4f0a-8f43-b97cc00ef301 (shared)
Thread-393::DEBUG::2012-08-29 09:06:57,896::task::978::TaskManager.Task::(_decref) Task=`4a6a8f40-7385-4c08-8533-30472e2ac062`::ref 1 aborting False
Thread-393::ERROR::2012-08-29 09:06:57,897::blockVolume::401::Storage.Volume::(validateImagePath) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/blockVolume.py", line 399, in validateImagePath
    os.mkdir(imageDir, 0755)
OSError: [Errno 2] No such file or directory: '/rhev/data-center/019f7b52-d4b9-43e4-968a-49b87f036351/051fc268-41ec-4f0a-8f43-b97cc00ef301/images/17a842c6-3f67-429c-a86f-35141136542c'
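The `[Errno 2]` (ENOENT) above is the root cause: a storage domain in maintenance is not linked under `/rhev/data-center/<pool>/<domain>`, so the parent of the image directory is missing, and `os.mkdir()` does not create parents. A minimal standalone sketch of the failure mode (the path and wrapper name are illustrative, not vdsm's actual code):

```python
import errno
import os


def make_image_dir(image_dir):
    """Illustrative stand-in for vdsm's validateImagePath step."""
    try:
        os.mkdir(image_dir, 0o755)  # does NOT create missing parents
    except OSError as e:
        if e.errno == errno.ENOENT:
            # vdsm wraps this as ImagePathError ("Image path does not
            # exist or cannot be accessed/created", code 254)
            raise RuntimeError(
                "Image path does not exist or cannot be "
                "accessed/created: %r" % image_dir)
        raise


# Parent directory is absent, just like the unlinked domain mountpoint,
# so the mkdir fails with ENOENT and is rewrapped.
try:
    make_image_dir("/no-such-data-center/domain-uuid/images/img-uuid")
except RuntimeError as e:
    print("reproduced:", e)
```

This is why the hotplug fails only for disks on the domain in maintenance: the volume path simply cannot be prepared on the host.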
Thread-393::ERROR::2012-08-29 09:06:57,898::task::853::TaskManager.Task::(_setError) Task=`4a6a8f40-7385-4c08-8533-30472e2ac062`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 2759, in prepareImage
    imgVolumes = img.prepare(sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/image.py", line 339, in prepare
    chain = self.getChain(sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/image.py", line 275, in getChain
    srcVol = volclass(self.repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/blockVolume.py", line 77, in __init__
    volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/volume.py", line 127, in __init__
    self.validate()
  File "/usr/share/vdsm/storage/blockVolume.py", line 86, in validate
    volume.Volume.validate(self)
  File "/usr/share/vdsm/storage/volume.py", line 139, in validate
    self.validateImagePath()
  File "/usr/share/vdsm/storage/blockVolume.py", line 402, in validateImagePath
    raise se.ImagePathError(imageDir)
ImagePathError: Image path does not exist or cannot be accessed/created: ('/rhev/data-center/019f7b52-d4b9-43e4-968a-49b87f036351/051fc268-41ec-4f0a-8f43-b97cc00ef301/images/17a842c6-3f67-429c-a86f-35141136542c',)
Thread-393::DEBUG::2012-08-29 09:06:57,898::task::872::TaskManager.Task::(_run) Task=`4a6a8f40-7385-4c08-8533-30472e2ac062`::Task._run: 4a6a8f40-7385-4c08-8533-30472e2ac062 ('051fc268-41ec-4f0a-8f43-b97cc00ef301', '019f7b52-d4b9-43e4-968a-49b87f036351', '17a842c6-3f67-429c-a86f-35141136542c', '1acfb3b9-de74-44c0-ac54-64fce40e16e4') {} failed - stopping task


*** rhevm log ***

2012-08-29 09:05:25,584 INFO  [org.ovirt.engine.core.bll.AttachDiskToVmCommand] (pool-4-thread-42) [206c4979] Running command: AttachDiskToVmCommand internal: false. Entities affected :  ID: ad3cf3a6-59d0-4272-ad2d-6ddec168adf0 Type: VM,  ID: b3ec311f-8ffd-459b-9cef-66d8236f51a7 Type: Disk
2012-08-29 09:05:25,604 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (pool-4-thread-42) [206c4979] START, HotPlugDiskVDSCommand(vdsId = 357666a8-f053-11e1-b63d-001a4a169738, vmId=ad3cf3a6-59d0-4272-ad2d-6ddec168adf0, volumeId = b3ec311f-8ffd-459b-9cef-66d8236f51a7), log id: 144054f1
2012-08-29 09:05:25,988 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-4-thread-42) [206c4979] Failed in HotPlugDiskVDS method
2012-08-29 09:05:25,988 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-4-thread-42) [206c4979] Error code unexpected and error message VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = Unexpected exception
2012-08-29 09:05:25,988 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-4-thread-42) [206c4979] Command org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand return value 
 Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusOnlyReturnForXmlRpc
mStatus                       Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
mCode                         16
mMessage                      Unexpected exception


2012-08-29 09:05:25,988 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-4-thread-42) [206c4979] Vds: Cougar08
2012-08-29 09:05:25,988 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-4-thread-42) [206c4979] Command HotPlugDiskVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = Unexpected exception
2012-08-29 09:05:25,988 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (pool-4-thread-42) [206c4979] FINISH, HotPlugDiskVDSCommand, log id: 144054f1
2012-08-29 09:05:25,988 ERROR [org.ovirt.engine.core.bll.AttachDiskToVmCommand] (pool-4-thread-42) [206c4979] Command org.ovirt.engine.core.bll.AttachDiskToVmCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = Unexpected exception
2012-08-29 09:05:25,994 ERROR [org.ovirt.engine.core.bll.AttachDiskToVmCommand] (pool-4-thread-42) [206c4979] Transaction rolled-back for command: org.ovirt.engine.core.bll.AttachDiskToVmCommand.
2012-08-29 09:05:26,002 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (pool-4-thread-42) [206c4979] No severity for USER_FAILED_ATTACH_DISK_TO_VM type
2012-08-29 09:05:26,037 INFO  [org.ovirt.engine.core.bll.AttachDiskToVmCommand] (pool-4-thread-42) [50a1a759] Running command: AttachDiskToVmCommand internal: false. Entities affected :  ID: ad3cf3a6-59d0-4272-ad2d-6ddec168adf0 Type: VM,  ID: 17a842c6-3f67-429c-a86f-35141136542c Type: Disk
2012-08-29 09:05:26,064 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand] (pool-4-thread-42) [50a1a759] START, HotPlugDiskVDSCommand(vdsId = 357666a8-f053-11e1-b63d-001a4a169738, vmId=ad3cf3a6-59d0-4272-ad2d-6ddec168adf0, volumeId = 17a842c6-3f67-429c-a86f-35141136542c), log id: 1642e9f4
2012-08-29 09:05:26,101 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-4-thread-42) [50a1a759] Failed in HotPlugDiskVDS method
2012-08-29 09:05:26,101 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-4-thread-42) [50a1a759] Error code unexpected and error message VDSGenericException: VDSErrorException: Failed to HotPlugDiskVDS, error = Unexpected exception
2012-08-29 09:05:26,101 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (pool-4-thread-42) [50a1a759] Command org.ovirt.engine.core.vdsbroker.vdsbroker.HotPlugDiskVDSCommand return value
Comment 1 Allon Mureinik 2012-08-30 02:48:23 EDT
This bug is unrelated to hot unplugging - it's a UI filtering issue.

A simpler reproduction:
1. Create 2 SDs (type is irrelevant)
2. Create a disk on each SD
3. Disable one of the SDs
4. Create a VM
5. Attempt to attach a disk to the VM - both disks are visible in the menu (instead of just the one on the active SD).
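The missing filter described above can be sketched as follows. This is a hedged illustration of the intended behavior only; the data shapes and names are hypothetical, not the actual ovirt-engine types or queries:

```python
# Sketch: when building the "Attach Disk" list, offer only disks whose
# storage domain is in status Active.  Disks on a domain in Maintenance
# should be filtered out, mirroring what the "Add Virtual Disk"
# drop-down already does for domain selection.
ACTIVE = "Active"
MAINTENANCE = "Maintenance"


def attachable_disks(disks, domain_status):
    """Return the disks whose storage domain is currently Active.

    disks          -- list of dicts with 'id' and 'sd' keys (illustrative)
    domain_status  -- mapping of SD name -> status string (illustrative)
    """
    return [d for d in disks if domain_status.get(d["sd"]) == ACTIVE]


disks = [{"id": "disk-1", "sd": "SD-01"},
         {"id": "disk-2", "sd": "SD-02"}]
status = {"SD-01": ACTIVE, "SD-02": MAINTENANCE}
print(attachable_disks(disks, status))  # only disk-1 is offered
```

With such a filter in place, the second disk would never appear in the attach dialog, so the hotplug could not be attempted against an unlinked domain in the first place.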
Comment 2 Ayal Baron 2012-09-03 09:35:45 EDT
Removing the 'attach' part because there is no reason to block users from attaching disks to VMs.  This is a purely logical operation.
Comment 3 Itamar Heim 2013-03-03 16:41:16 EST
Closing old bugs. If this issue is still relevant/important in current version, please re-open the bug.
