Bug 1570368

Summary: GET diskattachments for a vm is missing a logical_name of snapshot disks (that are attached to another vm)
Product: [oVirt] ovirt-engine
Component: RestAPI
Version: 4.2.2.6
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: unspecified
Reporter: Natalie Gavrielov <ngavrilo>
Assignee: Liran Rotenberg <lrotenbe>
QA Contact: sshmulev
CC: aefrat, ahadas, bugs, frolland, michal.skrivanek, ngavrilo, sshmulev, tnisan
Keywords: Automation
Flags: pm-rhel: ovirt-4.5?
Target Milestone: ovirt-4.4.8
Target Release: ---
Doc Type: If docs needed, set a value
Regression: ---
oVirt Team: Virt
Type: Bug
Last Closed: 2021-09-12 15:43:36 UTC
Attachments:
- logs: engine, vdsm, art_log (automation)
- ovirt-guest-agent.log

Description Natalie Gavrielov 2018-04-22 11:51:36 UTC
Created attachment 1425310 [details]
logs: engine, vdsm, art_log (automation)

Description of problem:
GET diskattachments is missing the logical_name field of a snapshot disk (one that is attached to another VM).

Version-Release number of selected component (if applicable):
rhvm-4.2.2.6-0.1.el7.noarch
vdsm-4.20.23-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with 1 disk and create a snapshot.
2. Create a second VM (backup VM) with 2 disks (no snapshot).
3. Attach the snapshot disk of the source VM (from step 1) to the backup VM (from step 2).
4. Start both VMs.
5. Get the disk attachments for the backup VM (a minimal SDK sketch of this flow follows below).
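
For reference, a minimal sketch of steps 3-5 using the oVirt Python SDK (ovirtsdk4); the engine URL, credentials, snapshot ID and other values are placeholders or examples taken from the output below and would need to match the actual environment:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholders: engine URL and credentials must match the actual environment.
connection = sdk.Connection(
    url='https://ovirt_engine/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

backup_vm_id = '97ddce8c-7f8b-43fb-899e-5e590b9fc701'      # backup VM from step 2
snapshot_disk_id = '4a3da19a-cbe8-4121-b901-534e3e692953'  # source VM's disk from step 1
snapshot_id = 'snapshot-uuid'                              # snapshot from step 1 (placeholder)

attachments_service = (
    connection.system_service()
    .vms_service()
    .vm_service(backup_vm_id)
    .disk_attachments_service()
)

# Step 3: attach the snapshot disk of the source VM to the backup VM.
attachments_service.add(
    types.DiskAttachment(
        disk=types.Disk(id=snapshot_disk_id, snapshot=types.Snapshot(id=snapshot_id)),
        interface=types.DiskInterface.VIRTIO,
        active=True,
    )
)

# Step 5: list the disk attachments and print logical_name for each of them.
for attachment in attachments_service.list():
    print(attachment.id, attachment.logical_name)

connection.close()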

Actual results:
The disk attachment of the snapshot disk (the middle one of the three below) is missing logical_name for the backup VM:

https://ovirt_engine/ovirt-engine/api/vms/97ddce8c-7f8b-43fb-899e-5e590b9fc701/diskattachments

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<disk_attachments>
  <disk_attachment href="/ovirt-engine/api/vms/97ddce8c-7f8b-43fb-899e-5e590b9fc701/diskattachments/1e619887-d31a-429e-90c0-4bbd37f478f4" id="1e619887-d31a-429e-90c0-4bbd37f478f4">
    <active>true</active>
    <bootable>true</bootable>
    <interface>virtio</interface>
    <logical_name>/dev/vda</logical_name>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/1e619887-d31a-429e-90c0-4bbd37f478f4" id="1e619887-d31a-429e-90c0-4bbd37f478f4" />
    <vm href="/ovirt-engine/api/vms/97ddce8c-7f8b-43fb-899e-5e590b9fc701" id="97ddce8c-7f8b-43fb-899e-5e590b9fc701" />
  </disk_attachment>
  <disk_attachment href="/ovirt-engine/api/vms/97ddce8c-7f8b-43fb-899e-5e590b9fc701/diskattachments/4a3da19a-cbe8-4121-b901-534e3e692953" id="4a3da19a-cbe8-4121-b901-534e3e692953">
    <active>true</active>
    <bootable>false</bootable>
    <interface>virtio</interface>          <-- After this line there should be a logical_name
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/4a3da19a-cbe8-4121-b901-534e3e692953" id="4a3da19a-cbe8-4121-b901-534e3e692953" />
    <vm href="/ovirt-engine/api/vms/97ddce8c-7f8b-43fb-899e-5e590b9fc701" id="97ddce8c-7f8b-43fb-899e-5e590b9fc701" />
  </disk_attachment>
  <disk_attachment href="/ovirt-engine/api/vms/97ddce8c-7f8b-43fb-899e-5e590b9fc701/diskattachments/55877977-1ff6-4df5-a8be-18190256d721" id="55877977-1ff6-4df5-a8be-18190256d721">
    <active>true</active>
    <bootable>false</bootable>
    <interface>virtio</interface>
    <logical_name>/dev/vdb</logical_name>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/55877977-1ff6-4df5-a8be-18190256d721" id="55877977-1ff6-4df5-a8be-18190256d721" />
    <vm href="/ovirt-engine/api/vms/97ddce8c-7f8b-43fb-899e-5e590b9fc701" id="97ddce8c-7f8b-43fb-899e-5e590b9fc701" />
  </disk_attachment>
</disk_attachments>


Expected results:
logical_name should be present for each disk attachment.

Additional info:
1. The disk attachment operation from the art_log:
2018-04-22 12:11:30,290 - MainThread - diskattachments - DEBUG - CREATE request content is --  url:/ovirt-engine/api/vms/89ff2a96-fca3-44b2-ab8c-4c3edaef4381/diskattachments body:<disk_attachment id="fa13ce2b-d12c-43f7-95c5-3381ee307942">
    <active>true</active>
    <interface>virtio</interface>
    <disk id="fa13ce2b-d12c-43f7-95c5-3381ee307942">
        <snapshot id="32152584-e176-4eda-aabc-22462f02a9b3"/>
    </disk>
</disk_attachment>

2. source vm disk attachments:
https://ovirt_engine/ovirt-engine/api/vms/1b8208f7-db28-49d0-b620-4b2fcc923b25/diskattachments

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<disk_attachments>
  <disk_attachment href="/ovirt-engine/api/vms/1b8208f7-db28-49d0-b620-4b2fcc923b25/diskattachments/4a3da19a-cbe8-4121-b901-534e3e692953" id="4a3da19a-cbe8-4121-b901-534e3e692953">
    <active>true</active>
    <bootable>true</bootable>
    <interface>virtio</interface>
    <logical_name>/dev/vda</logical_name>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/4a3da19a-cbe8-4121-b901-534e3e692953" id="4a3da19a-cbe8-4121-b901-534e3e692953" />
    <vm href="/ovirt-engine/api/vms/1b8208f7-db28-49d0-b620-4b2fcc923b25" id="1b8208f7-db28-49d0-b620-4b2fcc923b25" />
  </disk_attachment>
</disk_attachments>

3. Output of lsblk on the backup VM:
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                  11:0    1 1024M  0 rom
vda                 252:0    0   10G  0 disk
|-vda1              252:1    0  700M  0 part /boot
|-vda2              252:2    0    1G  0 part
`-vda3              252:3    0  8.3G  0 part
  `-VolGroup01-root 253:0    0  8.3G  0 lvm  /
vdb                 252:16   0   10G  0 disk
vdc                 252:32   0   10G  0 disk
|-vdc1              252:33   0  700M  0 part
|-vdc2              252:34   0    1G  0 part [SWAP]
`-vdc3              252:35   0  8.3G  0 part

4. The same scenario in 4.1 works fine (getting logical_name for each one of the disk attachments).

Comment 1 Tal Nisan 2018-04-23 10:44:44 UTC
Michal, why is the logical name issue a storage bug?

Natalie, do you have guest tools installed on the guest? Did you let the guest run for a couple of minutes before checking?

Comment 2 Michal Skrivanek 2018-04-23 11:29:07 UTC
(In reply to Tal Nisan from comment #1)
> Michal, why is the logical name issue a storage bug?

Well, https://ovirt.org/develop/release-management/features/storage/reportguestdiskslogicaldevicename/ is primarily a storage feature. If you see anything to fix in the guest agent, virt can help there, but it doesn't look like that is the case.

Comment 3 Natalie Gavrielov 2018-04-23 11:50:25 UTC
(In reply to Tal Nisan from comment #1)
> 
> Natalie, do you have guest tools installed on the guest? Did you let the
> guest run for a couple of minutes before checking?

Yes, it's installed (and running).
We wait until the guest reports an IP address and only then continue with the test.
I think that was enough time, because it did report the logical names of the other 2 disk attachments, just not this one.

Comment 4 Tal Nisan 2018-04-23 13:19:00 UTC
Natalie, if the guest agent reports the logical name, it should appear in the database. Can you see it in the vm_device table?

Comment 5 Natalie Gavrielov 2018-04-24 12:11:16 UTC
(In reply to Tal Nisan from comment #4)
> Natalie, if the guest agent reports the logical name it should appear in the
> database, can you see it in the vm_device table?

From the db:
engine=# select * from vm_device where vm_id='84128620-a285-4ee0-8cac-dc757ef7eda1' and type='disk' and device='disk';
-[ RECORD 1 ]-----+-------------------------------------------------------------
device_id         | c91c8391-ca4d-4fc9-8790-cd2918326d06
vm_id             | 84128620-a285-4ee0-8cac-dc757ef7eda1
type              | disk
device            | disk
address           | {type=pci, slot=0x09, bus=0x00, domain=0x0000, function=0x0}
spec_params       | { }
is_managed        | t
is_plugged        | t
is_readonly       | f
_create_date      | 2018-04-22 10:50:32.074316+03
_update_date      | 2018-04-22 10:53:04.423711+03
alias             | ua-c91c8391-ca4d-4fc9-8790-cd2918326d06
custom_properties | { }
snapshot_id       | 964fd208-c591-457b-b5a4-720e09915f8b
logical_name      |
host_device       |
-[ RECORD 2 ]-----+-------------------------------------------------------------
device_id         | 11c7e386-2e41-40f2-9239-c59f8f03ebf4
vm_id             | 84128620-a285-4ee0-8cac-dc757ef7eda1
type              | disk
device            | disk
address           | {type=pci, slot=0x08, bus=0x00, domain=0x0000, function=0x0}
spec_params       | { }
is_managed        | t
is_plugged        | t
is_readonly       | f
_create_date      | 2018-04-22 10:50:33.650143+03
_update_date      | 2018-04-22 10:53:04.423711+03
alias             | ua-11c7e386-2e41-40f2-9239-c59f8f03ebf4
custom_properties | { }
snapshot_id       |
logical_name      | /dev/vdb
host_device       |
-[ RECORD 3 ]-----+-------------------------------------------------------------
device_id         | 2c94094c-a650-416c-86c8-e4f079ad0946
vm_id             | 84128620-a285-4ee0-8cac-dc757ef7eda1
type              | disk
device            | disk
address           | {type=pci, slot=0x07, bus=0x00, domain=0x0000, function=0x0}
spec_params       | { }
is_managed        | t
is_plugged        | t
is_readonly       | f
_create_date      | 2018-04-22 10:48:38.577049+03
_update_date      | 2018-04-22 10:53:04.423711+03
alias             | ua-2c94094c-a650-416c-86c8-e4f079ad0946
custom_properties | { }
snapshot_id       |
logical_name      | /dev/vda
host_device       |


From rest:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<disk_attachments>
  <disk_attachment href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1/diskattachments/2c94094c-a650-416c-86c8-e4f079ad0946" id="2c94094c-a650-416c-86c8-e4f079ad0946">
    <active>true</active>
    <bootable>true</bootable>
    <interface>virtio</interface>
    <logical_name>/dev/vda</logical_name>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/2c94094c-a650-416c-86c8-e4f079ad0946" id="2c94094c-a650-416c-86c8-e4f079ad0946" />
    <vm href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1" id="84128620-a285-4ee0-8cac-dc757ef7eda1" />
  </disk_attachment>
  <disk_attachment href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1/diskattachments/c91c8391-ca4d-4fc9-8790-cd2918326d06" id="c91c8391-ca4d-4fc9-8790-cd2918326d06">
    <active>true</active>
    <bootable>false</bootable>
    <interface>virtio</interface>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/c91c8391-ca4d-4fc9-8790-cd2918326d06" id="c91c8391-ca4d-4fc9-8790-cd2918326d06" />
    <vm href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1" id="84128620-a285-4ee0-8cac-dc757ef7eda1" />
  </disk_attachment>
  <disk_attachment href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1/diskattachments/11c7e386-2e41-40f2-9239-c59f8f03ebf4" id="11c7e386-2e41-40f2-9239-c59f8f03ebf4">
    <active>true</active>
    <bootable>false</bootable>
    <interface>virtio</interface>
    <logical_name>/dev/vdb</logical_name>
    <pass_discard>false</pass_discard>
    <read_only>false</read_only>
    <uses_scsi_reservation>false</uses_scsi_reservation>
    <disk href="/ovirt-engine/api/disks/11c7e386-2e41-40f2-9239-c59f8f03ebf4" id="11c7e386-2e41-40f2-9239-c59f8f03ebf4" />
    <vm href="/ovirt-engine/api/vms/84128620-a285-4ee0-8cac-dc757ef7eda1" id="84128620-a285-4ee0-8cac-dc757ef7eda1" />
  </disk_attachment>
</disk_attachments>

The logical name for disk attachment c91c8391-ca4d-4fc9-8790-cd2918326d06 (snapshot id 964fd208-c591-457b-b5a4-720e09915f8b) is missing in both REST and the DB.

Comment 6 Tal Nisan 2018-04-25 11:19:52 UTC
Michal, the logical name is not updated in the database, so it doesn't reach the engine at all; in that case I reckon it is guest-tools related?

Comment 7 Michal Skrivanek 2018-04-25 11:52:22 UTC
(In reply to Tal Nisan from comment #6)
> Michal, the logical name is not updated in the database, so it doesn't reach
> the engine at all; in that case I reckon it is guest-tools related?

The field is generic, so indeed you can assume it was not reported in the first place. That's easy to check in the guest agent logs - please attach them. Also run /usr/share/ovirt-guest-agent/diskmapper and paste the output.
But I can't tell if it is an issue or not. Is it supposed to be reported? E.g. the mapping doesn't work for LUNs because they do not have a serial number.

Comment 8 Natalie Gavrielov 2018-04-25 13:55:50 UTC
Created attachment 1426707 [details]
ovirt-guest-agent.log

[root@vm-83-58 ~]# /usr/share/ovirt-guest-agent/diskmapper
/dev/sr0|QEMU_DVD-ROM_QM00003
/dev/vda|2c94094c-a650-416c-8
/dev/vdb|11c7e386-2e41-40f2-9
/dev/vdc|c91c8391-ca4d-4fc9-8
[root@vm-83-58 ~]#

Comment 9 Michal Skrivanek 2018-04-25 15:14:01 UTC
Hm, the log is not very helpful, but the diskmapper output is!
It seems it's reported fine.
The actual mapping happens in vdsm - but there doesn't seem to be anything in the original logs. Natalie, can you please give access to the guest and host so we can confirm it's really mapped correctly in vdsm?
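
For context, the guest agent reports "device|serial" pairs (see the diskmapper output in comment 8), and for image disks the serial looks like a prefix of the disk UUID. A rough, purely illustrative Python sketch of the kind of prefix matching that is then expected to happen on the vdsm/engine side (this is not the actual vdsm code):

# Illustrative only: match guest-reported serials (truncated disk UUIDs) to disk IDs.
def map_logical_names(diskmapper_lines, disk_ids):
    mapping = {}
    for line in diskmapper_lines:
        device, _, serial = line.partition('|')
        for disk_id in disk_ids:
            if serial and disk_id.startswith(serial):
                mapping[disk_id] = device
    return mapping

# Data taken from comment 8 (diskmapper output) and comment 5 (device IDs):
lines = [
    '/dev/vda|2c94094c-a650-416c-8',
    '/dev/vdb|11c7e386-2e41-40f2-9',
    '/dev/vdc|c91c8391-ca4d-4fc9-8',
]
ids = [
    '2c94094c-a650-416c-86c8-e4f079ad0946',
    '11c7e386-2e41-40f2-9239-c59f8f03ebf4',
    'c91c8391-ca4d-4fc9-8790-cd2918326d06',
]
print(map_logical_names(lines, ids))
# The snapshot disk c91c8391-... does map to /dev/vdc here, which is exactly the
# logical_name that never reaches the vm_device table or the API in this bug.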

Comment 11 Elad 2018-05-03 09:16:44 UTC
A direct LUN's logical name is also missing from the VM's diskattachments collection.

https://storage-ge-02.scl.lab.tlv.redhat.com/ovirt-engine/api/vms/2302a8a2-5736-4734-b99b-49e1e7c6ac9b/diskattachments


<disk_attachments>
<disk_attachment href="/ovirt-engine/api/vms/2302a8a2-5736-4734-b99b-49e1e7c6ac9b/diskattachments/3ee682b8-f89d-496c-b46b-03127e067e76" id="3ee682b8-f89d-496c-b46b-03127e067e76">
<active>true</active>
<bootable>true</bootable>
<interface>virtio</interface>
<logical_name>/dev/vda</logical_name>
<pass_discard>false</pass_discard>
<read_only>false</read_only>
<uses_scsi_reservation>false</uses_scsi_reservation>
<disk href="/ovirt-engine/api/disks/3ee682b8-f89d-496c-b46b-03127e067e76" id="3ee682b8-f89d-496c-b46b-03127e067e76"/>
<vm href="/ovirt-engine/api/vms/2302a8a2-5736-4734-b99b-49e1e7c6ac9b" id="2302a8a2-5736-4734-b99b-49e1e7c6ac9b"/>
</disk_attachment>
<disk_attachment href="/ovirt-engine/api/vms/2302a8a2-5736-4734-b99b-49e1e7c6ac9b/diskattachments/23a8e6b6-b661-463d-8c20-7fbb9280bd67" id="23a8e6b6-b661-463d-8c20-7fbb9280bd67">
<active>true</active>
<bootable>false</bootable>
<interface>virtio</interface>
<pass_discard>false</pass_discard>
<read_only>false</read_only>
<uses_scsi_reservation>false</uses_scsi_reservation>
<disk href="/ovirt-engine/api/disks/23a8e6b6-b661-463d-8c20-7fbb9280bd67" id="23a8e6b6-b661-463d-8c20-7fbb9280bd67"/>
<vm href="/ovirt-engine/api/vms/2302a8a2-5736-4734-b99b-49e1e7c6ac9b" id="2302a8a2-5736-4734-b99b-49e1e7c6ac9b"/>
</disk_attachment>
</disk_attachments>



/dev/vdb is missing from the vm_device table for that VM:


engine=# select * from vm_device where vm_id='2302a8a2-5736-4734-b99b-49e1e7c6ac9b' and type='disk' and device='disk';                                                                                             
-[ RECORD 1 ]-----+-------------------------------------------------------------
device_id         | 23a8e6b6-b661-463d-8c20-7fbb9280bd67
vm_id             | 2302a8a2-5736-4734-b99b-49e1e7c6ac9b
type              | disk
device            | disk
address           | {type=pci, slot=0x0a, bus=0x00, domain=0x0000, function=0x0}
spec_params       | { }
is_managed        | t
is_plugged        | t
is_readonly       | f
_create_date      | 2018-05-03 11:59:59.653187+03
_update_date      | 2018-05-03 12:07:26.109184+03
alias             | virtio-disk1
custom_properties | { }
snapshot_id       | 
logical_name      | 
host_device       | 
-[ RECORD 2 ]-----+-------------------------------------------------------------
device_id         | 3ee682b8-f89d-496c-b46b-03127e067e76
vm_id             | 2302a8a2-5736-4734-b99b-49e1e7c6ac9b
type              | disk
device            | disk
address           | {type=pci, slot=0x07, bus=0x00, domain=0x0000, function=0x0}
spec_params       | { }
is_managed        | t
is_plugged        | t
is_readonly       | f
_create_date      | 2018-05-03 11:59:35.383759+03
_update_date      | 2018-05-03 12:07:26.109184+03
alias             | ua-3ee682b8-f89d-496c-b46b-03127e067e76
custom_properties | { }
snapshot_id       | 
logical_name      | /dev/vda
host_device       | 





But it seems that the guest agent is not aware of this disk at all.

Notice vdb in lsblk output:


[root@localhost ~]# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                  11:0    1 1024M  0 rom  
vda                 252:0    0   10G  0 disk 
├─vda1              252:1    0  700M  0 part /boot
├─vda2              252:2    0    1G  0 part [SWAP]
└─vda3              252:3    0  8.3G  0 part 
  └─VolGroup01-root 253:0    0  8.3G  0 lvm  /
vdb                 252:16   0   50G  0 disk 



And it is missing from the diskmapper output:


[root@localhost ~]# /usr/share/ovirt-guest-agent/diskmapper
/dev/sr0|QEMU_DVD-ROM_QM00003
/dev/vda|3ee682b8-f89d-496c-b


Michal - Does this mean we have a bug in the guest agent?

Comment 12 Elad 2018-06-10 09:35:25 UTC
This prevents us from testing the full backup/restore API flow.
Therefore I'm raising the severity and marking as AutomationBlocker.

Comment 13 Michal Skrivanek 2018-08-15 14:09:11 UTC
Direct LUNs do not have a serial number and we do not map them in the guest agent. It's a limitation of that feature (though it seems it is not described on the upstream feature page).

We would need to come up with a different solution for disk mapping for direct LUN disks, if it is important. 
Tal, moving back to you for your consideration.

Comment 14 Tal Nisan 2018-08-16 11:04:05 UTC
This requires design and all; we'll need some way to correlate (the LUN ID parsed somehow?).
Yaniv, how important do you think this is, given that it is an RFE?
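
One possible shape of such a correlation, purely hypothetical and not an existing oVirt mechanism: a direct LUN has no engine-assigned serial, but it does expose stable identifiers inside the guest under /dev/disk/by-id, so a guest-side report of those links could in principle be matched against the LUN ID the engine already stores:

# Hypothetical guest-side collection of stable identifier -> device mappings
# (wwn-*/scsi-* links); illustrative only, not how oVirt currently works.
import os

def guest_lun_devices(by_id_dir='/dev/disk/by-id'):
    mapping = {}
    for name in os.listdir(by_id_dir):
        if name.startswith(('wwn-', 'scsi-')):
            identifier = name.split('-', 1)[1]
            mapping[identifier] = os.path.realpath(os.path.join(by_id_dir, name))
    return mapping

Whether these identifiers can be correlated reliably with the LUN IDs the engine stores is exactly the design question raised above.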

Comment 15 Yaniv Lavi 2018-08-20 13:16:25 UTC
(In reply to Tal Nisan from comment #14)
> This requires design and all, we'll need some way to correlate (LUN ID
> parsed somehow?)
> Yaniv, how important do you think it is as it is an RFE

The conversation shifted from one bug to another.
Was the original regression on the snapshot disk resolved?

Comment 16 Elad 2018-08-20 13:23:54 UTC
No. 
We've worked around this by fetching the disk logical name from the guest instead of from the API.
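
A rough sketch of what such a guest-side workaround might look like (the guest host and plain ssh/lsblk access are assumptions for illustration; the actual automation may do this differently): read NAME/SERIAL pairs inside the guest and match the serials to disk IDs with the same prefix match as in the earlier sketch, instead of relying on logical_name from the API.

# Sketch only; 'guest_host' and root ssh access are assumptions for illustration.
import subprocess

def guest_name_serial_pairs(guest_host):
    out = subprocess.check_output(
        ['ssh', f'root@{guest_host}', 'lsblk', '-dn', '-o', 'NAME,SERIAL'],
        text=True,
    )
    pairs = []
    for line in out.splitlines():
        parts = line.split()
        if len(parts) == 2:  # devices without a serial are skipped
            name, serial = parts
            pairs.append((f'/dev/{name}', serial))
    return pairs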

Comment 17 Yaniv Lavi 2018-09-05 09:45:36 UTC
Is mapping direct LUNs similar to what we would need for Cinder volumes?
If so, this is an RFE that should be scoped as part of that effort.

Comment 18 Tal Nisan 2018-09-16 12:16:28 UTC
(In reply to Yaniv Lavi from comment #17)
> Is mapping direct LUNs similar to what we would need for Cinder volumes?
> If so, this is an RFE that should be scoped as part of that effort.

Freddy, that's a part of your POC, can you answer please?

Comment 19 Fred Rolland 2018-10-07 08:51:35 UTC
(In reply to Yaniv Lavi from comment #17)
> Is mapping direct LUNs similar to what we would need for Cinder volumes?
> If so, this is an RFE that should be scoped as part of that effort.

I still don't know how the mapping will work with Cinder volumes, so I can't be sure of the answer to that.

Comment 20 Red Hat Bugzilla Rules Engine 2018-10-07 13:54:44 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 21 Elad 2018-12-06 13:57:40 UTC
Removing AutomationBlocker as we don't rely on logical_name from the API anymore.

Comment 22 Ryan Barry 2019-01-08 15:25:33 UTC
Since there's a workaround for this, and it's no longer blocking backup and restore testing, I'm removing blocker+ and deferring

Comment 23 Red Hat Bugzilla Rules Engine 2019-01-08 15:25:35 UTC
This bug report has Keywords: Regression or TestBlocker.
Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.

Comment 24 Ryan Barry 2019-01-21 13:34:22 UTC
Re-targeting, because these bugs either do not have blocker+, or do not have a patch posted

Comment 26 Michal Skrivanek 2020-03-19 13:13:22 UTC
Dropping the regression and blocker flags; LUNs never had a mapping since they lack a serial number.

Comment 27 Arik 2021-01-13 16:00:51 UTC
It is fixed for LUNs (bz 1859092).
We need to check if it still happens for snapshot disks.

Comment 28 Arik 2021-08-31 11:19:53 UTC
Avihai, can you please check if it still happens with snapshot disks (we suspect that the changes we've made for LUN may have fixed it already)?

Comment 29 Avihai 2021-09-02 07:19:33 UTC
(In reply to Arik from comment #28)
> Avihai, can you please check if it still happens with snapshot disks (we
> suspect that the changes we've made for LUN may have fixed it already)?

Sophie, please check the TestCase6169 results in the latest rhv-4.4.8 (check a few runs back as well) to see if this issue still reproduces, and reply here.
Thank you!

Comment 30 sshmulev 2021-09-02 12:46:31 UTC
(In reply to Avihai from comment #29)
> (In reply to Arik from comment #28)
> > Avihai, can you please check if it still happens with snapshot disks (we
> > suspect that the changes we've made for LUN may have fixed it already)?
> 
> Sophie, please check the TestCase6169 results in the latest
> rhv-4.4.8 (check a few runs back as well) to see if this issue still
> reproduces, and reply here.
> Thank you!

Tested on versions:
engine-4.4.8.5-0.4.el8ev
vdsm-4.40.80.6-1.el8ev

This issue doesn't reproduce here; I tested it 5 times and it passed successfully in all of them.

Comment 31 Arik 2021-09-12 15:43:36 UTC
Thanks Sophie