Bug 965386 - Taskomatic ERROR: [Errno 32] Broken Pipe
Summary: Taskomatic ERROR: [Errno 32] Broken Pipe
Keywords:
Status: CLOSED DUPLICATE of bug 820612
Alias: None
Product: Spacewalk
Classification: Community
Component: Server
Version: 1.9
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Michael Mráka
QA Contact: Red Hat Satellite QA List
URL:
Whiteboard:
Depends On:
Blocks: space27
 
Reported: 2013-05-21 06:33 UTC by Stefan Bluhm
Modified: 2017-09-28 18:08 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-09-12 13:52:48 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 516556 0 low CLOSED Repo sync hangs after about 2 hours 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 889885 0 unspecified CLOSED Fedora Reposync failure 2021-02-22 00:41:40 UTC

Description Stefan Bluhm 2013-05-21 06:33:45 UTC
Server: CentOS 6.4 64-bit / Spacewalk 1.9 on VMware

When syncing large repositories (Fedora 18 / CentOS 6 base), I always get this message in the repo log:

[Errno 32] Broken pipe
ERROR: [Errno 32] Broken pipe

Sometimes Taskomatic also stops working and needs to be restarted.

I am starting the sync via the web interface.
Additional info:
I am running Spacewalk in a VM (3 GB memory, PostgreSQL, slow HDD).




Taskomatic log:
INFO   | jvm 4    | 2013/05/21 08:26:22 | May 21, 2013 8:26:21 AM com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector run
INFO   | jvm 4    | 2013/05/21 08:26:22 | WARNING: com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@60f00e0f -- APPARENT DEADLOCK!!! Creating emergency threads for unassigned pending tasks!
INFO   | jvm 4    | 2013/05/21 08:26:32 | May 21, 2013 8:26:32 AM com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector run
INFO   | jvm 4    | 2013/05/21 08:26:32 | WARNING: com.mchange.v2.async.ThreadPoolAsynchronousRunner$DeadlockDetector@60f00e0f -- APPARENT DEADLOCK!!! Complete Status:
INFO   | jvm 4    | 2013/05/21 08:26:32 |       Managed Threads: 3
INFO   | jvm 4    | 2013/05/21 08:26:32 |       Active Threads: 0
INFO   | jvm 4    | 2013/05/21 08:26:32 |       Active Tasks:
INFO   | jvm 4    | 2013/05/21 08:26:32 |       Pending Tasks:
INFO   | jvm 4    | 2013/05/21 08:26:32 |               com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@1d05c9a1
INFO   | jvm 4    | 2013/05/21 08:26:32 |               com.mchange.v2.resourcepool.BasicResourcePool$1RefurbishCheckinResourceTask@66f877e9
INFO   | jvm 4    | 2013/05/21 08:26:32 | Pool thread stack traces:
INFO   | jvm 4    | 2013/05/21 08:26:32 |       Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1,5,main]
INFO   | jvm 4    | 2013/05/21 08:26:32 |               java.lang.Object.wait(Native Method)
INFO   | jvm 4    | 2013/05/21 08:26:32 |               com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:534)
INFO   | jvm 4    | 2013/05/21 08:26:32 |       Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#0,5,main]
INFO   | jvm 4    | 2013/05/21 08:26:32 |               java.lang.Object.wait(Native Method)
INFO   | jvm 4    | 2013/05/21 08:26:32 |               com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:534)
INFO   | jvm 4    | 2013/05/21 08:26:32 |       Thread[com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2,5,main]
INFO   | jvm 4    | 2013/05/21 08:26:32 |               java.lang.Object.wait(Native Method)
INFO   | jvm 4    | 2013/05/21 08:26:32 |               com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:534)
INFO   | jvm 4    | 2013/05/21 08:26:32 |
INFO   | jvm 4    | 2013/05/21 08:26:32 |

Comment 1 Michael Mráka 2013-05-22 06:21:20 UTC
Are there any errors in /var/log/rhn/reposync/*.log?

Comment 2 Stefan Bluhm 2013-05-22 10:29:23 UTC
Hi Michael,

The error was just the one mentioned above:
[Errno 32] Broken pipe
ERROR: [Errno 32] Broken pipe

Here are the last few lines of a few of the logs:


[root@control reposync]# cat fedora18-x86_64-2013.05.20-17\:21\:40.log
Sync started: Mon May 20 17:21:40 2013
['/usr/bin/spacewalk-repo-sync', '--channel', 'fedora18-x86_64', '--type', 'yum']
Repo URL: https://mirrors.fedoraproject.org/metalink?repo=fedora-18&arch=x86_64
Packages in repo:             33868
Packages already synced:          0
ERROR: [Errno 32] Broken pipe

[root@control reposync]# tail centos6-x86_64-2013.05.21-13\:54\:58.log
1114/6381 : python-ipaddr-2.1.9-3.el6-0.noarch
1115/6381 : libreadline-java-javadoc-0.8.0-24.3.el6-0.x86_64
1116/6381 : bison-2.4.1-5.el6-0.x86_64
1117/6381 : libevent-devel-1.4.13-4.el6-0.i686
1118/6381 : atmel-firmware-1.3-7.el6-0.noarch
1119/6381 : mythes-ru-0.20070613-3.1.el6-0.noarch
1120/6381 : vigra-devel-1.6.0-2.1.el6-0.i686
1121/6381 : meanwhile-devel-1.1.0-3.el6-0.x86_64
[Errno 32] Broken pipe
ERROR: [Errno 32] Broken pipe

[root@control reposync]# tail fedora18-i386-updates-2013.05.16-23\:33\:36.log
failure: texlive-cmbright-svn21107.8.1-20.fc18.noarch.rpm from fedora18-i386-updates: [Errno 256] No more mirrors to try.
3781/12846 : uim-skk-1.8.5-2.fc18-0.i686
failure: uim-skk-1.8.5-2.fc18.i686.rpm from fedora18-i386-updates: [Errno 256] No more mirrors to try.
3782/12846 : gvfs-smb-1.14.2-3.fc18-0.i686
failure: gvfs-smb-1.14.2-3.fc18.i686.rpm from fedora18-i386-updates: [Errno 256] No more mirrors to try.
3783/12846 : libreoffice-ogltrans-3.6.6.2-5.fc18-1.i686
failure: libreoffice-ogltrans-3.6.6.2-5.fc18.i686.rpm from fedora18-i386-updates: [Errno 256] No more mirrors to try.
3784/12846 : texlive-uptex-bin-svn26912.0-20.20130321_r29448.fc18-2.i686
failure: texlive-uptex-bin-svn26912.0-20.20130321_r29448.fc18.i686.rpm from fedora18-i386-updates: [Errno 256] No more mirrors to try.
ERROR: [Errno 32] Broken pipe

Comment 3 Stefan Bluhm 2013-05-28 04:45:19 UTC
Could this have something to do with multiple syncs happening at the same time? I noticed from the logs that when I restart Taskomatic, previously unexecuted scheduled syncs also start running at some point.

I have disabled the scheduled job and started the crashed Taskomatic again, and now the download seems to run for a lot longer (at least it managed to keep going overnight).

Comment 4 Stefan Bluhm 2013-06-12 13:09:11 UTC
This happens when the HDD is a network drive. I have moved the storage to the local machine and it works fine, so the error probably shows up when the disk transfer is too slow.

Comment 5 Michael Mráka 2013-09-12 13:52:48 UTC
This seems to be caused by incorrect handling of the stdout and stderr of spacewalk-repo-sync.

I'm closing it as duplicate of bug 820612.

If you disagree, please reopen the bug.

*** This bug has been marked as a duplicate of bug 820612 ***
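
The diagnosis in comment 5 (mishandled stdout/stderr of spacewalk-repo-sync) is consistent with the log symptom: the sync proceeds normally until a write fails, and "ERROR: [Errno 32] Broken pipe" is the last line. A minimal sketch of the failure mode, not Spacewalk code: the `head` reader below is a hypothetical stand-in for whatever was consuming the sync process's output before it went away. Writing to a pipe whose read end has closed fails with EPIPE (errno 32), which Python surfaces as `BrokenPipeError`:

```python
import subprocess

# Reader that consumes one line and then exits -- standing in for a log
# consumer that disappears mid-sync.
reader = subprocess.Popen(["head", "-n", "1"], stdin=subprocess.PIPE)

caught_errno = None
try:
    # Keep writing after the reader has exited. Once the kernel sees the
    # read end of the pipe is closed, the write fails with EPIPE (errno 32).
    for _ in range(100_000):
        reader.stdin.write(b"1120/6381 : some-package-1.0-1.el6-0.noarch\n")
        reader.stdin.flush()
except BrokenPipeError as exc:
    caught_errno = exc.errno
    print(f"ERROR: [Errno {caught_errno}] Broken pipe")
finally:
    try:
        reader.stdin.close()  # close() may itself hit the broken pipe
    except BrokenPipeError:
        pass
    reader.wait()
```

(CPython ignores SIGPIPE at startup, so the failed write raises an exception instead of silently killing the process, matching the "[Errno 32] Broken pipe" lines in the reposync logs.)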

Comment 6 Eric Herget 2017-09-28 18:08:33 UTC
This BZ was closed some time during 2.5, 2.6, or 2.7. Adding to the 2.7 tracking bug.

