Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1755460

Summary: [v2v] v2v VMware->RHV fail on qemu-img: error while writing sector ...: Input/output error
Product: Red Hat Enterprise Linux 7
Reporter: Ilanit Stein <istein>
Component: libguestfs
Assignee: Richard W.M. Jones <rjones>
Status: CLOSED DUPLICATE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 7.7
CC: michal.skrivanek, nsoffer, ptoscano, rbarry
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-10-03 10:07:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
journal.log from Sep 25, 2019 (flags: none)

Description Ilanit Stein 2019-09-25 14:27:54 UTC
Description of problem:
When running a v2v migration from CFME (VMware to RHV) for a RHEL 7.7 VM with a 1 TB disk at 70% usage, using VDDK,
the disk copy failed after migrating 1.03 TB of 1.10 TB.

v2v-import log error:
qemu-img: error while writing sector 1392087040: Input/output error 

daemon.log error:
2019-09-25 02:04:50,588 INFO    (Thread-5) [http] CLOSE client=local [connection=11727.200110/1, dispatch=4191.419305/727430, operation=3758.203536/727430, read=534.690564/515292, write=1949.710381/515292, zero=80.659942/212138]

nsoffer: "Then the connection was closed without flushing. It means rhv-upload-plugin failed or was killed."

2019-09-25 02:05:09,053 INFO    (Thread-1364) [http] OPEN client=local 
2019-09-25 02:05:09,054 INFO    (Thread-1364) [tickets] [local] REMOVE ticket=68ea2ffa-8370-4727-97f1-2d138992de64 

nsoffer: "This is engine terminating the transfer."

Version-Release number of selected component (if applicable):
CFME-5.11.0.24
RHV-4.3.5.4-0.1.el7
virt-v2v-1.40.2-5.el7_7.1.x86_64 
vdsm-4.30.30-1.el7ev.x86_64 
nbdkit-1.8.0-1.el7.x86_64 
libvirt-daemon-driver-qemu-4.5.0-23.el7_7.1.x86_64 
qemu-img-rhev-2.12.0-33.el7_7.4.x86_64 
qemu-kvm-common-rhev-2.12.0-33.el7_7.4.x86_64 
qemu-kvm-rhev-2.12.0-33.el7_7.4.x86_64

How reproducible:
Not sure. It might happen in other scenarios as well.

Comment 3 Nir Soffer 2019-09-26 22:24:39 UTC
(In reply to Ilanit Stein from comment #0)
Comment 0 does not include enough info; let me add the missing details, taken from
the v2v-devel thread.


v2v-import log error:

nbdkit: python[1]: debug: pwrite count=2097152 offset=712746467328 fua=0
nbdkit: debug: python: unload
nbdkit: debug: vddk: unload
nbdkit: debug: VDDK call: VixDiskLib_Exit ()
nbdkit: debug: VixDiskLib: VixDiskLib_Exit: Unmatched Init calls so far: 1.
qemu-img: error while writing sector 1392087040: Input/output error
nbdkit: debug: VixDiskLibVim: VixDiskLibVim_Exit: Clean up.

This log fragment does not show any useful info.


ovirt-imageio-daemon.log:

These are the last messages from daemon.log:
(all previous messages seem to be normal read/write messages).

2019-09-25 02:04:50,473 INFO    (Thread-5) [images] [local] WRITE size=2097152 offset=712742273024 flush=False ticket=68ea2ffa-8370-4727-97f1-2d138992de64
2019-09-25 02:04:50,512 INFO    (Thread-5) [images] [local] WRITE size=2097152 offset=712744370176 flush=False ticket=68ea2ffa-8370-4727-97f1-2d138992de64
2019-09-25 02:04:50,548 INFO    (Thread-5) [images] [local] WRITE size=2097152 offset=712746467328 flush=False ticket=68ea2ffa-8370-4727-97f1-2d138992de64
2019-09-25 02:04:50,588 INFO    (Thread-5) [http] CLOSE client=local [connection=11727.200110/1, dispatch=4191.419305/727430, operation=3758.203536/727430, read=534.690564/515292, write=1949.710381/515292, zero=80.659942/212138]
2019-09-25 02:05:09,053 INFO    (Thread-1364) [http] OPEN client=local
2019-09-25 02:05:09,054 INFO    (Thread-1364) [tickets] [local] REMOVE ticket=68ea2ffa-8370-4727-97f1-2d138992de64
2019-09-25 02:05:09,055 INFO    (Thread-1364) [http] CLOSE client=local [connection=0.001648/1, dispatch=0.000488/1]

This log shows a normal upload ending abnormally (the client closed the connection
without flushing). The client hit some issue or crashed.

There are no logs attached to the bug, so we don't know anything else.


This does not look like an ovirt-imageio-daemon or RHV issue. Please move
the bug to virt-v2v and attach complete logs.
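[Editorial note] Cross-checking the numbers in the two logs (assuming the 512-byte sectors that qemu-img uses in its error messages) shows the failing sector is exactly the byte offset where the next 2 MiB write would have started, i.e. the write immediately following the last one logged by the daemon:

```python
SECTOR_SIZE = 512  # assumed: qemu-img reports offsets in 512-byte sectors

failing_sector = 1392087040            # from the qemu-img error
failing_offset = failing_sector * SECTOR_SIZE

last_offset = 712746467328             # final WRITE in daemon.log
last_size = 2097152                    # 2 MiB

print(failing_offset)                              # 712748564480
print(failing_offset == last_offset + last_size)   # True
```

This is consistent with the upload being cut off mid-stream rather than the daemon rejecting a specific write.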

Comment 6 Michal Skrivanek 2019-09-27 07:07:09 UTC
Ilanit, can you please check whether this failure is from the same run in which CFME killed conversions? If yes, please close this and track an IMS fix instead (to fix the polling frequency inside the CFME IMS code).

Comment 9 Richard W.M. Jones 2019-10-02 14:12:39 UTC
Ilanit, could you collect the journal from the machine running virt-v2v?
This does look quite similar to the cezdata bug from here, but I can't
prove that without the journal.

# journalctl -S "2019-09-25" -U "2019-09-26" > journal.log

Comment 10 Ilanit Stein 2019-10-02 14:59:05 UTC
Created attachment 1621888 [details]
journal.log from Sep 25, 2019

Comment 11 Richard W.M. Jones 2019-10-02 15:37:39 UTC
Thanks Ilanit for getting back so quickly.

This is the same as the cezdata bug, because in the journal we can
see that CFME is logging in and sending a kill signal to virt-v2v (or
perhaps to a related process):

Sep 25 02:04:49 leopard03.qa.lab.tlv.redhat.com sshd[36961]: Accepted publickey for root from 10.35.161.44 port 44844 ssh2: RSA SHA256:g5Bu+bCcBH/JcIKTqU/bwBNOkydRI04Stgrxk2CygOY
Sep 25 02:04:50 leopard03.qa.lab.tlv.redhat.com systemd-logind[3281]: New session 1977 of user root.
Sep 25 02:04:50 leopard03.qa.lab.tlv.redhat.com systemd[1]: Started Session 1977 of user root.
Sep 25 02:04:50 leopard03.qa.lab.tlv.redhat.com sshd[36961]: pam_unix(sshd:session): session opened for user root by (uid=0)
Sep 25 02:04:50 leopard03.qa.lab.tlv.redhat.com sudo[36981]:     root : TTY=unknown ; PWD=/root ; USER=root ; COMMAND=/bin/kill -s TERM 14115

and we can see from the wrapper log that virt-v2v is reported dead about a second later:

2019-09-25 02:04:51,424:INFO: virt-v2v terminated with return code 15 (virt-v2v-wrapper:1914)
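[Editorial note] The "return code 15" matches death by SIGTERM, whose POSIX signal number is 15, consistent with the `kill -s TERM` seen in the journal. A minimal sketch (using `sleep` as a stand-in for virt-v2v) shows how a SIGTERM-killed child is reported:

```python
import signal
import subprocess

# Stand-in for virt-v2v: a long-running child process.
proc = subprocess.Popen(["sleep", "60"])

# Deliver SIGTERM, mirroring CFME's "/bin/kill -s TERM <pid>".
proc.send_signal(signal.SIGTERM)
proc.wait()

# POSIX signal 15 is SIGTERM; Python encodes death-by-signal
# as a negative return code, while the wrapper logs the bare 15.
print(int(signal.SIGTERM))   # 15
print(proc.returncode)       # -15
```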

Comment 12 Ilanit Stein 2019-10-03 10:07:20 UTC

*** This bug has been marked as a duplicate of bug 1755632 ***