Bug 1532546 - [Cinder] - Removing disk manually from storage provider does not change disk's status to ILLEGAL
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Assignee: Fred Rolland
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks: 1539837
 
Reported: 2018-01-09 09:51 UTC by Lilach Zitnitski
Modified: 2022-06-27 08:01 UTC
CC: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2018-07-30 08:34:56 UTC
oVirt Team: Storage
Embargoed:
sbonazzo: ovirt-4.3-


Attachments
logs (566.96 KB, application/zip)
2018-01-09 09:52 UTC, Lilach Zitnitski


Links
Red Hat Issue Tracker RHV-46609 (last updated 2022-06-27 08:01:56 UTC)

Description Lilach Zitnitski 2018-01-09 09:51:52 UTC
Description of problem:
When the directory or the LV backing a specific disk is deleted manually from the storage, the disk's status remains OK, even after more than a day.

Version-Release number of selected component (if applicable):
rhvm-4.2.1-0.2.el7.noarch
vdsm-4.20.11-1.el7ev.x86_64

(also reproducible in 4.1)

How reproducible:
100%

Steps to Reproduce:
1. Create a new disk
2. Delete the disk's directory manually from the storage domain (a sketch of this step follows below)
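
A minimal sketch of step 2 for a file-based (e.g. NFS) storage domain, assuming the domain is mounted on the host under /rhev/data-center/mnt/... - the mount point and storage domain UUID below are placeholders; the image group ID is the one from this report:

# Step 2 sketch: remove a disk's image directory behind the engine's back.
import shutil

SD_MOUNT = "/rhev/data-center/mnt/server:_export_path"    # assumed mount point
SD_UUID = "<storage-domain-uuid>"                         # placeholder
IMAGE_GROUP_ID = "e61a70f5-7b92-4444-9fe5-852ae1884faa"   # disk from this report

image_dir = f"{SD_MOUNT}/{SD_UUID}/images/{IMAGE_GROUP_ID}"
shutil.rmtree(image_dir)  # afterwards, the engine keeps reporting the disk as OK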

Actual results:
The disk's status remains OK

Expected results:
Disk's status should change to ILLEGAL

Additional info:

In the logs I couldn't find any relevant errors or warnings.

Disk in status 1 (OK) 

-[ RECORD 1 ]---------+-------------------------------------
image_guid            | 4e308e90-47c0-4368-a81a-53598e9faefd
creation_date         | 2018-01-09 11:17:53+02
size                  | 5368709120
it_guid               | 00000000-0000-0000-0000-000000000000
parentid              | 00000000-0000-0000-0000-000000000000
imagestatus           | 1
lastmodified          | 1970-01-01 02:00:00+02
vm_snapshot_id        |
volume_type           | 2
volume_format         | 5
image_group_id        | e61a70f5-7b92-4444-9fe5-852ae1884faa
_create_date          | 2018-01-09 11:17:53.22566+02
_update_date          | 2018-01-09 11:18:06.467112+02
active                | t
volume_classification | 0
qcow_compat           | 0
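
For reference, a record like the one above can be pulled straight from the engine database. A minimal sketch using psycopg2, assuming the default "engine" database and placeholder credentials; the table and column names match the record above, and the ILLEGAL value (4) is taken from the engine's ImageStatus enum:

# Sketch: read the disk's imagestatus from the engine database.
import psycopg2

DISK_ID = "e61a70f5-7b92-4444-9fe5-852ae1884faa"  # image_group_id from the record above

conn = psycopg2.connect(dbname="engine", user="engine",
                        password="changeme", host="localhost")
cur = conn.cursor()
cur.execute(
    "SELECT image_guid, imagestatus, active"
    " FROM images WHERE image_group_id = %s",
    (DISK_ID,),
)
for image_guid, imagestatus, active in cur.fetchall():
    # imagestatus 1 = OK (as shown above); 4 = ILLEGAL
    print(image_guid, imagestatus, active)
cur.close()
conn.close()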


Directories under this storage domain

├── 6f474314-d1b3-492e-825c-ab18204d9973
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.lease
│   └── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.meta
├── 703fbd13-0164-470a-aaaa-d5494e745919
│   ├── 04f1ef76-a305-48fb-86a0-ffa656c67578
│   ├── 04f1ef76-a305-48fb-86a0-ffa656c67578.lease
│   ├── 04f1ef76-a305-48fb-86a0-ffa656c67578.meta
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.lease
│   └── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.meta
├── 8e24f8f9-8eb3-466b-8692-05e6ce044a28
│   ├── 01c56c9d-738f-4465-acaa-50da72f76058
│   ├── 01c56c9d-738f-4465-acaa-50da72f76058.lease
│   ├── 01c56c9d-738f-4465-acaa-50da72f76058.meta
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50
│   ├── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.lease
│   └── 9294ef9e-87d9-4a85-b2c6-49a0155fcd50.meta
├── 9f3a3cd8-634b-4959-8a32-bdab43c5ae89
│   ├── d0f8e809-e303-4da8-88d5-850c462073d9
│   ├── d0f8e809-e303-4da8-88d5-850c462073d9.lease
│   └── d0f8e809-e303-4da8-88d5-850c462073d9.meta
├── a9783ca4-92b8-4c5f-b1d3-f657e7e947b6
│   ├── 26f9ff6a-d9c3-4a8e-824d-2ac355038e86
│   ├── 26f9ff6a-d9c3-4a8e-824d-2ac355038e86.lease
│   └── 26f9ff6a-d9c3-4a8e-824d-2ac355038e86.meta
└── e61a70f5-7b92-4444-9fe5-852ae1884faa

Comment 1 Lilach Zitnitski 2018-01-09 09:52:19 UTC
Created attachment 1378924 [details]
logs

Comment 2 Allon Mureinik 2018-01-09 13:06:42 UTC
RHV doesn't proactively monitor disks' existence, and (silently) assumes that nobody will mess with the underlying storage. The next time you try to use the disk, the operation will error out, and the disk should be changed to ILLEGAL.
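
One way to observe this behaviour is to poll the disk's status through the REST API after attempting an operation on it. A minimal sketch using the oVirt Python SDK (ovirtsdk4), with a placeholder engine URL and credentials:

# Sketch: check a disk's reported status via the oVirt Python SDK.
import ovirtsdk4 as sdk

DISK_ID = "e61a70f5-7b92-4444-9fe5-852ae1884faa"  # disk from this report

connection = sdk.Connection(
    url="https://engine.example.com/ovirt-engine/api",  # placeholder engine URL
    username="admin@internal",
    password="password",
    insecure=True,  # use ca_file=... in a real setup
)
try:
    disk_service = connection.system_service().disks_service().disk_service(DISK_ID)
    disk = disk_service.get()
    # Per this bug, disk.status stays DiskStatus.OK even after the backing
    # directory/LV has been removed and operations on the disk have failed.
    print(disk.id, disk.status)
finally:
    connection.close()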

Comment 3 Lilach Zitnitski 2018-01-10 12:08:47 UTC
I tested this a bit: I tried to attach the disk to a VM and start it, to copy the disk, and to move it.
In all cases I failed to use the disk, but it remained in status OK and didn't change to ILLEGAL.

Comment 4 Allon Mureinik 2018-02-12 13:15:23 UTC
(In reply to Lilach Zitnitski from comment #3)
> I tried to test this a little bit, and I tried to attach the disk to vm and
> start it, copy the disk and move it. 
> In all cases I failed to use the disk, but it remained in status OK and
> didn't change to ILLEGAL.
Sorry for the late reply. Missed this needinfo while travelling.

Yaniv - do we want to have these bugs filed and tracked accordingly?
TBH, I don't think this will ever be a priority (although it's definitely a bug).

Comment 5 Yaniv Lavi 2018-02-14 12:56:12 UTC
(In reply to Allon Mureinik from comment #4)
> (In reply to Lilach Zitnitski from comment #3)
> > I tried to test this a little bit, and I tried to attach the disk to vm and
> > start it, copy the disk and move it. 
> > In all cases I failed to use the disk, but it remained in status OK and
> > didn't change to ILLEGAL.
> Sorry for the late reply. Missed this needinfo while travelling.
> 
> Yaniv - Do we want to have these bugs filed and track them accordingly.
> TBH, I don't think this will ever be a priority (although it's definitely a
> bug).

This is not interesting for storage domains, as we only support the management flow via the engine. It would be relevant to Cinder, though, since Cinder has its own management API.
Do you want this bug to track that?

Comment 6 Allon Mureinik 2018-02-14 13:01:49 UTC
(In reply to Yaniv Lavi from comment #5)

> This is not interesting for storage domain as we only support the management
> flow via the engine. This would be relevant to Cinder though (since it is
> its own management api).
> Do you want this to track that?

That was the question - we either use this BZ to track the use case where a disk was removed "behind our back" in Cinder, or we just close it as WONTFIX.

Comment 8 Tal Nisan 2018-07-30 08:34:56 UTC
As stated before, we currently do not actively monitor disks' status. If we'd like this situation to change, for Cinder or in general, feel free to reopen.

