Description of problem:
When the directory (on a file domain) or the LV (on a block domain) of a specific disk is deleted manually from the storage, the disk's status remains OK, even after more than a day.

Version-Release number of selected component (if applicable):
rhvm-4.2.1-0.2.el7.noarch
vdsm-4.20.11-1.el7ev.x86_64
(also reproducible in 4.1)

How reproducible:
100%

Steps to Reproduce:
1. Create a new disk.
2. Delete the disk's image directory from the storage domain.

Actual results:
The disk's status remains OK.

Expected results:
The disk's status should change to ILLEGAL.

Additional info:
I couldn't find any relevant errors or warnings in the logs.

Disk in status 1 (OK):

-[ RECORD 1 ]---------+-------------------------------------
image_guid            | 4e308e90-47c0-4368-a81a-53598e9faefd
creation_date         | 2018-01-09 11:17:53+02
size                  | 5368709120
it_guid               | 00000000-0000-0000-0000-000000000000
parentid              | 00000000-0000-0000-0000-000000000000
imagestatus           | 1
lastmodified          | 1970-01-01 02:00:00+02
vm_snapshot_id        |
volume_type           | 2
volume_format         | 5
image_group_id        | e61a70f5-7b92-4444-9fe5-852ae1884faa
_create_date          | 2018-01-09 11:17:53.22566+02
_update_date          | 2018-01-09 11:18:06.467112+02
active                | t
volume_classification | 0
qcow_compat           | 0

Directories under this storage domain:

├── 6f474314-d1b3-492e-825c-ab18204d9973
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.lease
│   └── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.meta
├── 703fbd13-0164-470a-aaaa-d5494e745919
│   ├── 04f1ef76-a305-48fb-86a0-ffa656c67578
│   ├── 04f1ef76-a305-48fb-86a0-ffa656c67578.lease
│   ├── 04f1ef76-a305-48fb-86a0-ffa656c67578.meta
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.lease
│   └── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.meta
├── 8e24f8f9-8eb3-466b-8692-05e6ce044a28
│   ├── 01c56c9d-738f-4465-acaa-50da72f76058
│   ├── 01c56c9d-738f-4465-acaa-50da72f76058.lease
│   ├── 01c56c9d-738f-4465-acaa-50da72f76058.meta
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.lease
│   └── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.meta
├── 9f3a3cd8-634b-4959-8a32-bdab43c5ae89
│   ├── d0f8e809-e303-4da8-88d5-850c462073d9
│   ├── d0f8e809-e303-4da8-88d5-850c462073d9.lease
│   └── d0f8e809-e303-4da8-88d5-850c462073d9.meta
├── a9783ca4-92b8-4c5f-b1d3-f657e7e947b6
│   ├── 26f9ff6a-d9c3-4a8e-824d-2ac355038e86
│   ├── 26f9ff6a-d9c3-4a8e-824d-2ac355038e86.lease
│   └── 26f9ff6a-d9c3-4a8e-824d-2ac355038e86.meta
└── e61a70f5-7b92-4444-9fe5-852ae1884faa

The last image group, e61a70f5-7b92-4444-9fe5-852ae1884faa, is the one referenced by the record above; its volume files are gone, yet imagestatus is still 1 (OK).
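For reference, a rough sketch of the reproduction and verification steps on a file-based (NFS) domain. The mount path and storage domain UUID are placeholders, the database/user name "engine" is assumed to be the default, and the mapping of ILLEGAL to imagestatus = 4 is my reading of the engine's ImageStatus values rather than something taken from this report:

  # On a host that has the storage domain mounted, remove the disk's image
  # directory behind the engine's back (image_group_id taken from the record above):
  rm -rf /rhev/data-center/mnt/<server:_export_path>/<sd_uuid>/images/e61a70f5-7b92-4444-9fe5-852ae1884faa

  # On the engine machine, check the disk's status in the engine database.
  # Expected: imagestatus eventually becomes 4 (ILLEGAL); actual: it stays 1 (OK).
  psql -U engine -d engine -x -c "SELECT image_guid, imagestatus, image_group_id
                                  FROM images
                                  WHERE image_group_id = 'e61a70f5-7b92-4444-9fe5-852ae1884faa';"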
Created attachment 1378924 [details] logs
RHV doesn't proactively monitor disk existence, and (silently) assumes that nobody will mess with the underlying storage. The next time you try to use the disk, the operation will error out, and the disk's status should then be changed to ILLEGAL.
I tested this a bit: I tried to attach the disk to a VM and start it, to copy the disk, and to move it.
In all cases the operation failed, but the disk remained in status OK and didn't change to ILLEGAL.
(In reply to Lilach Zitnitski from comment #3)
> I tested this a bit: I tried to attach the disk to a VM and start it, to
> copy the disk, and to move it.
> In all cases the operation failed, but the disk remained in status OK and
> didn't change to ILLEGAL.

Sorry for the late reply - I missed this needinfo while travelling.

Yaniv - do we want to have these bugs filed and track them accordingly?
TBH, I don't think this will ever be a priority (although it's definitely a bug).
(In reply to Allon Mureinik from comment #4)
> (In reply to Lilach Zitnitski from comment #3)
> > I tested this a bit: I tried to attach the disk to a VM and start it, to
> > copy the disk, and to move it.
> > In all cases the operation failed, but the disk remained in status OK and
> > didn't change to ILLEGAL.
> Sorry for the late reply - I missed this needinfo while travelling.
>
> Yaniv - do we want to have these bugs filed and track them accordingly?
> TBH, I don't think this will ever be a priority (although it's definitely a
> bug).

This is not interesting for a storage domain, since we only support the management flow via the engine. It would be relevant to Cinder, though, since it has its own management API.
Do you want this bug to track that?
(In reply to Yaniv Lavi from comment #5)
> This is not interesting for a storage domain, since we only support the
> management flow via the engine. It would be relevant to Cinder, though,
> since it has its own management API.
> Do you want this bug to track that?

That was the question - we can either use this BZ to track the use case where a disk was removed "behind our back" in Cinder, or just close it as WONTFIX.
As stated before, we currently do not actively monitor disks' status. If we'd like this situation to change, for Cinder or in general, feel free to reopen.