Bug 1891534 - [v2v][VMware to CNV VM import API] After Ceph storage gets full, cancel VM import end up with v2v pod remaining in status Terminating
Keywords:
Status: CLOSED DUPLICATE of bug 1893528
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: V2V
Version: 2.5.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 2.5.1
Assignee: Sam Lucidi
QA Contact: Ilanit Stein
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-10-26 14:52 UTC by Ilanit Stein
Modified: 2020-11-04 15:23 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-04 15:23:41 UTC
Target Upstream Version:
Embargoed:



Description Ilanit Stein 2020-10-26 14:52:45 UTC
Description of problem:
VM import of a RHEL-7 VM with a 100GB preallocated disk, from VMware to CNV via the API.
At 37% the import got stuck (probably due to no space left on the Ceph storage) and remained in this state.

Cancelling the VM import from the UI resulted in the v2v pod remaining in status Terminating, and in the bound PV and PVC being left behind.
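For reference, cancelling the import via the API amounts to deleting the VirtualMachineImport custom resource (group v2v.kubevirt.io). A minimal sketch with the Kubernetes Python client, assuming a hypothetical namespace "default" and CR name "rhel7-import":

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# VirtualMachineImport CRs are served by the v2v.kubevirt.io API group.
api.delete_namespaced_custom_object(
    group="v2v.kubevirt.io",
    version="v1beta1",
    namespace="default",            # hypothetical namespace
    plural="virtualmachineimports",
    name="rhel7-import",            # hypothetical CR name
    body=client.V1DeleteOptions(),
)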

Version-Release number of selected component (if applicable):
CNV-2.5: iib-22696 hco-v2.5.0-396

Expected results:
Cancelling the VM import should terminate the v2v pod and delete its bound PVC/PV.
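A quick way to verify the expected cleanup, sketched with the Kubernetes Python client (the namespace "default" is again a placeholder): no pod should be left with a deletion timestamp set (i.e., stuck in Terminating), and the import's PVC should be gone:

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
ns = "default"  # hypothetical namespace

# A pod stuck in Terminating keeps its deletion timestamp set but never goes away.
for pod in core.list_namespaced_pod(ns).items:
    if pod.metadata.deletion_timestamp is not None:
        print(f"pod {pod.metadata.name} is still terminating")

# After a successful cancel, no import PVC (or its bound PV) should remain.
for pvc in core.list_namespaced_persistent_volume_claim(ns).items:
    print(f"leftover PVC: {pvc.metadata.name} ({pvc.status.phase})")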

Comment 1 Fabien Dupont 2020-10-26 15:05:50 UTC
Can we get access to a reproducer? Our lab has too much storage for that to happen, and we need to investigate the pod/cluster state.

Comment 2 Sam Lucidi 2020-11-03 21:51:44 UTC
Hi Ilanit, as Fabien said, could we have access to a reproducer environment?

Comment 3 Ilanit Stein 2020-11-04 13:33:05 UTC
I think this bug is a duplicate of bug 1893528.
Please contact me offline if a reproducer env is still needed.

Comment 4 Sam Lucidi 2020-11-04 15:23:41 UTC

*** This bug has been marked as a duplicate of bug 1893528 ***

