Description of problem:
I ran "pulp-admin repo list" and got an "Internal Server Error". The system had been idle for the last two days; I had last used the Pulp setup on Friday. When I tried to reach the pulp server again with the "pulp-admin repo list" command, it threw this error:

[root@dhcp193-23 e3b]# pulp-admin -u admin -p admin repo list
error: operation failed: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Internal Server Error</title>
</head><body>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error or misconfiguration and was unable to complete your request.</p>
<p>Please contact the server administrator, root@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error.</p>
<p>More information about this error may be available in the server error log.</p>
<hr>
<address>Apache/2.2.15 (Red Hat) Server at dhcp193-23.pnq.redhat.com Port 443</address>
</body></html>

The pulp server log says:
========================
[root@dhcp193-23 e3b]# tail -f /var/log/pulp/pulp.log
 transport=TCP host=dhcp193-23.pnq.redhat.com port=5672 cacert=/etc/pki/qpid/ca/ca.crt clientcert=/etc/pki/qpid/client/client.pem
2011-05-02 06:23:23,321 [INFO][MainThread] connect() @ broker.py:86 - connecting: {dhcp193-23.pnq.redhat.com:5672}: transport=TCP host=dhcp193-23.pnq.redhat.com port=5672 cacert=/etc/pki/qpid/ca/ca.crt clientcert=/etc/pki/qpid/client/client.pem
2011-05-02 06:23:24,334 [INFO][MainThread] connect() @ broker.py:86 - connecting: {dhcp193-23.pnq.redhat.com:5672}: transport=TCP host=dhcp193-23.pnq.redhat.com port=5672 cacert=/etc/pki/qpid/ca/ca.crt clientcert=/etc/pki/qpid/client/client.pem

When I got this error, I checked all the services on the pulp server and they were all running:

[root@dhcp193-23 e3b]# service qpidd status
qpidd (pid 27086) is running...
[root@dhcp193-23 e3b]# service mongod status
mongod (pid 27084) is running...
[root@dhcp193-23 e3b]# service httpd status
httpd (pid 27114) is running...

Version-Release number of selected component (if applicable):
pulp 0.171
rhui-tools 2.0.22

How reproducible:
This is the second time I have hit this issue. I saw it earlier on build pulp 0.168, but at that time I simply restarted the pulp server.

Steps to Reproduce:
1.
2.
3.

Actual results:
The pulp-admin tool throws "Internal Server Error".

Expected results:
pulp-admin should keep working even if the system has been idle for more than two days.

Additional info:
Note: I'm filing this defect because I suspect the issue is in Apache, specifically when the system sits idle for more than two days. Here is what /var/log/httpd/error_log shows:

[Sun May 01 03:42:10 2011] [warn] mod_wsgi: Compiled for Python/2.6.2.
[Sun May 01 03:42:10 2011] [warn] mod_wsgi: Runtime using Python/2.6.5.
[Sun May 01 03:42:10 2011] [notice] Apache/2.2.15 (Unix) DAV/2 PHP/5.3.2 mod_python/3.3.1 Python/2.6.5 mod_ssl/2.2.15 OpenSSL/1.0.0-fips mod_wsgi/3.2 configured -- resuming normal operations
[Sun May 01 03:42:18 2011] [notice] child pid 2676 exit signal Segmentation fault (11)
[Sun May 01 03:42:19 2011] [notice] child pid 2748 exit signal Segmentation fault (11)
[Sun May 01 03:42:20 2011] [notice] child pid 2754 exit signal Segmentation fault (11)
[Sun May 01 03:42:21 2011] [notice] child pid 2760 exit signal Segmentation fault (11)
[Sun May 01 03:42:22 2011] [notice] child pid 2766 exit signal Segmentation fault (11)
[Sun May 01 03:42:23 2011] [notice] child pid 2772 exit signal Segmentation fault (11)
[Sun May 01 03:42:24 2011] [notice] child pid 2778 exit signal Segmentation fault (11)
[Sun May 01 03:42:25 2011] [notice] child pid 2784 exit signal Segmentation fault (11)
) : Python/ceval.c:2776: PyEval_EvalCodeEx: Assertion `tstate != ((void *)0)' failed.
[Sun May 01 03:42:27 2011] [notice] child pid 2790 exit signal Aborted (6)
[Sun May 01 03:42:28 2011] [notice] child pid 2796 exit signal Segmentation fault (11)
[Sun May 01 03:42:29 2011] [notice] child pid 2802 exit signal Segmentation fault (11)
[Sun May 01 03:42:30 2011] [notice] child pid 2808 exit signal Segmentation fault (11)
[Sun May 01 03:42:31 2011] [notice] child pid 2814 exit signal Segmentation fault (11)
) : Python/ceval.c:2776: PyEval_EvalCodeEx: Assertion `tstate != ((void *)0)' failed.
[Sun May 01 03:42:32 2011] [notice] child pid 2820 exit signal Aborted (6)
[Sun May 01 03:42:33 2011] [notice] child pid 2826 exit signal Segmentation fault (11)
[Sun May 01 03:42:34 2011] [notice] child pid 2832 exit signal Segmentation fault (11)
[Sun May 01 03:42:35 2011] [notice] child pid 2838 exit signal Segmentation fault (11)
[Sun May 01 03:42:36 2011] [notice] child pid 2844 exit signal Segmentation fault (11)
[Sun May 01 03:42:37 2011] [notice] child pid 2850 exit signal Segmentation fault (11)
[Sun May 01 03:42:38 2011] [notice] child pid 2857 exit signal Segmentation fault (11)
) : Python/ceval.c:2776: PyEval_EvalCodeEx: Assertion `tstate != ((void *)0)' failed.
[Sun May 01 03:42:39 2011] [notice] child pid 2863 exit signal Aborted (6)
[Sun May 01 03:42:40 2011] [notice] child pid 2869 exit signal Segmentation fault (11)
[Sun May 01 03:42:41 2011] [notice] child pid 2875 exit signal Segmentation fault (11)
[Sun May 01 03:42:42 2011] [notice] child pid 2881 exit signal Segmentation fault (11)
[Sun May 01 03:42:43 2011] [notice] child pid 2887 exit signal Segmentation fault (11)
[Sun May 01 03:42:44 2011] [notice] child pid 2893 exit signal Segmentation fault (11)
----<snippet>----

When I restarted the httpd daemon, the server came back up:

[root@dhcp193-23 e3b]# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd:                                            [  OK  ]
[root@dhcp193-23 e3b]# pulp-admin -u admin -p admin repo list
+------------------------------------------+
      List of Available Repositories
+------------------------------------------+

Id                 repo1
Name               custim_repo1
FeedURL            None
FeedType           None
Feed Certs         No
Consumer Certs     Yes
Arch               noarch
Sync Schedule      None
Packages           16
Files              0
Distributions      None
Publish            True
Clones             []
Groups             [u'custom']
Filters            []
Notes              {u'entitlement-path': u'/protected/$basearch/os'}
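A possible lead, based only on the log above: the first two warnings show mod_wsgi was compiled for Python/2.6.2 but is running under Python/2.6.5, which seems worth ruling out as the source of the child segfaults. The commands below are a minimal sketch for confirming that mismatch on the affected host; they assume the default RHEL httpd log path and RPM-installed packages:

  # interpreter mod_wsgi was built for vs. the one it is actually running with
  grep 'mod_wsgi: Compiled for' /var/log/httpd/error_log | tail -n 1
  grep 'mod_wsgi: Runtime using' /var/log/httpd/error_log | tail -n 1
  # installed package versions on this box
  rpm -q mod_wsgi python httpd
  # how many worker crashes have been logged so far
  grep -c 'exit signal' /var/log/httpd/error_log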
*** This bug has been marked as a duplicate of bug 696669 ***