Bug 1731428 - NameError: global name 'BrokenPipeError' is not defined [rhel-7.7.z]
Summary: NameError: global name 'BrokenPipeError' is not defined [rhel-7.7.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Pino Toscano
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1726168
Blocks:
 
Reported: 2019-07-19 11:53 UTC by RAD team bot copy to z-stream
Modified: 2019-08-23 10:37 UTC
CC List: 12 users

Fixed In Version: libguestfs-1.40.2-5.el7_7.1
Doc Type: No Doc Update
Doc Text:
Clone Of: 1726168
Environment:
Last Closed: 2019-08-06 14:19:50 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2358 0 None None None 2019-08-06 14:19:51 UTC

Description RAD team bot copy to z-stream 2019-07-19 11:53:42 UTC
This bug has been copied from bug #1726168 and is proposed for backport to the 7.7 z-stream (EUS).
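For context on the error in the summary: BrokenPipeError is a Python 3 built-in and does not exist in Python 2 (the interpreter used by the rhv-upload plugin on RHEL 7), so merely naming it in an "except" clause raises NameError. Below is a minimal illustrative sketch of a common compatibility shim, not the actual libguestfs patch; the helper name is_broken_pipe is hypothetical:

```python
import errno
import socket

# Sketch only (assumption, not the libguestfs fix): alias the Python 3
# built-in to its closest Python 2 equivalent so later "except" clauses
# and isinstance() checks work on both interpreters.
try:
    BrokenPipeError            # Python 3: name exists, nothing to do
except NameError:              # Python 2: the bare name raises NameError
    BrokenPipeError = socket.error


def is_broken_pipe(exc):
    """Return True if exc represents a broken pipe on either Python version."""
    # On Python 3, OSError(errno.EPIPE, ...) is a BrokenPipeError instance;
    # on Python 2, socket.error carries the errno attribute instead.
    return isinstance(exc, BrokenPipeError) and \
        getattr(exc, "errno", None) == errno.EPIPE
```

On Python 3, constructing OSError with errno.EPIPE automatically yields a BrokenPipeError instance, so the same check works unmodified on both sides of the shim.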

Comment 6 liuzi 2019-07-24 09:28:11 UTC
Reproduced the bug with builds:
virt-v2v-1.40.2-5.el7.x86_64
libvirt-4.5.0-12.el7.x86_64
libguestfs-1.40.2-5.el7.x86_64
ovirt-imageio-common-1.5.1-0.el7ev.x86_64
ovirt-imageio-daemon-1.5.1-0.el7ev.noarch

The bug can be reproduced on a non-oVirt host.

Verified the bug with builds:
virt-v2v-1.40.2-5.el7_7.1.x86_64
libvirt-4.5.0-23.el7.x86_64
libguestfs-1.40.2-5.el7_7.1.x86_64
ovirt-imageio-common-1.5.1-0.el7ev.x86_64
ovirt-imageio-daemon-1.5.1-0.el7ev.noarch

Steps:
1. Prepare a regular host (non-oVirt host) and use virt-v2v to convert a guest to RHV with the rhv-upload option:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-x86_64 --password-file /tmp/passwd -on rhel7.5-log -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os p2v_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem -oo rhv-direct=true -oo rhv-cluster=p2v

2. Wait until the transfer starts, and note the ticket value in the ovirt-imageio-daemon log on the oVirt host on which the VM will run:
2019-07-24 17:09:21,371 INFO    (Thread-848) [tickets] [local] ADD ticket={u'uuid': u'ac2568cd-f49a-4ee4-973b-f9f813a760e5', u'ops': [u'write'], u'url': u'file:///rhev/data-center/mnt/10.73.224.199:_home_p2v__data/fc770a83-690b-4bd4-ab29-0696ce431a13/images/c31e480c-8b16-4879-8178-b940fc8b756c/a102f9ba-4b52-4647-a297-3e20d198e783', u'sparse': True, u'timeout': 300, u'transfer_id': u'e85a3b2c-b6bf-4ae5-88c0-983b48d10ef1', u'size': 12884901888}

3. Make the ticket expire in one second by sending this request:
# curl --unix-socket /run/vdsm/ovirt-imageio-daemon.sock -X PATCH -d '{"timeout": 1}' http://localhost/tickets/ac2568cd-f49a-4ee4-973b-f9f813a760e5

4. Check the error message shown on the host:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-x86_64 --password-file /tmp/passwd -on rhel7.5-log -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os p2v_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem -oo rhv-direct=true -oo rhv-cluster=p2v
[   0.4] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-x86_64
[   2.5] Creating an overlay to protect the source from being modified
[   3.4] Opening the overlay
[  44.9] Inspecting the overlay
[ 258.0] Checking for sufficient free disk space in the guest
[ 258.0] Estimating space required on target for each disk
[ 258.0] Converting Red Hat Enterprise Linux Server 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[1683.8] Mapping filesystem data to avoid copying unused and blank areas
[1685.6] Closing the overlay
[1685.8] Assigning disks to buses
[1685.8] Checking if the guest needs BIOS or UEFI to boot
[1685.8] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os p2v_data
[1687.2] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.ypwJJO/nbdkit0.sock", "file.export": "/" } (raw)
nbdkit: python[1]: error: /var/tmp/v2v.OyHRdT/rhv-upload-plugin.py: pwrite: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.OyHRdT/rhv-upload-plugin.py", line 407, in pwrite\n    (offset, count))\n', '  File "/var/tmp/v2v.OyHRdT/rhv-upload-plugin.py", line 355, in request_failed\n    raise RuntimeError("%s: %d %s: %r" % (msg, status, reason, body[:200]))\n', "RuntimeError: could not write sector offset 127401984 size 2097152: 403 Forbidden: 'You are not allowed to access this resource: Ticket ac2568cd-f49a-4ee4-973b-f9f813a760e5 expired'\n"]
qemu-img: error while writing sector 248832: Input/output error

nbdkit: python[1]: error: /var/tmp/v2v.OyHRdT/rhv-upload-plugin.py: flush: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.OyHRdT/rhv-upload-plugin.py", line 510, in flush\n    request_failed(h, r, "could not flush")\n', '  File "/var/tmp/v2v.OyHRdT/rhv-upload-plugin.py", line 355, in request_failed\n    raise RuntimeError("%s: %d %s: %r" % (msg, status, reason, body[:200]))\n', "RuntimeError: could not flush: 403 Forbidden: 'You are not allowed to access this resource: Ticket ac2568cd-f49a-4ee4-973b-f9f813a760e5 expired'\n"]
nbdkit: python[1]: error: /var/tmp/v2v.OyHRdT/rhv-upload-plugin.py: flush: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.OyHRdT/rhv-upload-plugin.py", line 510, in flush\n    request_failed(h, r, "could not flush")\n', '  File "/var/tmp/v2v.OyHRdT/rhv-upload-plugin.py", line 355, in request_failed\n    raise RuntimeError("%s: %d %s: %r" % (msg, status, reason, body[:200]))\n', "RuntimeError: could not flush: 403 Forbidden: 'You are not allowed to access this resource: Ticket ac2568cd-f49a-4ee4-973b-f9f813a760e5 expired'\n"]
virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
nbdkit: python[1]: error: /var/tmp/v2v.OyHRdT/rhv-upload-plugin.py: close: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.OyHRdT/rhv-upload-plugin.py", line 530, in close\n    delete_disk_on_failure(h)\n', '  File "/var/tmp/v2v.OyHRdT/rhv-upload-plugin.py", line 516, in delete_disk_on_failure\n    disk_service.remove()\n', '  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 37334, in remove\n    self._internal_remove(headers, query, wait)\n', '  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 271, in _internal_remove\n    return future.wait() if wait else future\n', '  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 55, in wait\n    return self._code(response)\n', '  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 268, in callback\n    self._check_fault(response)\n', '  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 132, in _check_fault\n    self._raise_error(response, body)\n', '  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 118, in _raise_error\n    raise error\n', 'Error: Fault reason is "Operation Failed". Fault detail is "[Cannot remove Virtual Disk. Related operation is currently in progress. Please try again later.]". HTTP response code is 409.\n']

Result: There is no error such as "NameError: global name 'BrokenPipeError' is not defined", and the error reported when a ticket expires during guest import is correct, so moving the bug from ON_QA to VERIFIED.

Comment 8 errata-xmlrpc 2019-08-06 14:19:50 UTC
Since the problem described in this bug report should be resolved
in a recent advisory, it has been closed with a resolution of
ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2358

