Description of problem:
Often when I print multiple sequential jobs, only the first one makes it through. It seems that if I forget about the stuck jobs and reboot, the old jobs then come through. I'm hoping this crash might be the cause so it can get fixed. I have had this on FC19 as well as FC20.

Version-Release number of selected component:
ghostscript-9.10-5.fc20

Additional info:
reporter:         libreport-2.1.10
backtrace_rating: 4
cmdline:          gs -q -dNOPAUSE -dBATCH -dSAFER -sDEVICE=ps2write -sOUTPUTFILE=%stdout -dLanguageLevel=3 -r1200 -dCompressFonts=false -dNoT3CCITT -dNOINTERPOLATE -c 'save pop' -f /var/spool/cups/tmp/cupsscNQCL
crash_function:   gx_device_enum_ptr
executable:       /usr/bin/gs
kernel:           3.13.0-0.rc6.git0.1.fc21.x86_64
runlevel:         N 5
type:             CCpp
uid:              4

Truncated backtrace:
Thread no. 1 (10 frames)
 #0 gx_device_enum_ptr at base/gsdevice.c:97
 #1 pdf14_device_enum_ptrs at base/gdevp14.c:496
 #2 gc_trace at psi/igc.c:857
 #3 gs_gc_reclaim at psi/igc.c:335
 #4 context_reclaim at psi/zcontext.c:280
 #5 gs_vmreclaim at psi/ireclaim.c:155
 #6 ireclaim at psi/ireclaim.c:77
 #7 interp_reclaim at psi/interp.c:441
 #8 interp at psi/interp.c:1713
 #9 gs_call_interp at psi/interp.c:510
Created attachment 847387 [details] File: backtrace
Created attachment 847388 [details] File: cgroup
Created attachment 847389 [details] File: core_backtrace
Created attachment 847390 [details] File: dso_list
Created attachment 847391 [details] File: environ
Created attachment 847392 [details] File: exploitable
Created attachment 847393 [details] File: limits
Created attachment 847394 [details] File: maps
Created attachment 847395 [details] File: open_fds
Created attachment 847396 [details] File: proc_pid_status
Created attachment 847397 [details] File: var_log_messages
There are no newer ghostscript updates on koji.fedoraproject.org, but I tried downgrading and also got this crash running ghostscript-9.10-4.fc20.x86_64.

--- Running report_uReport ---
This problem has already been reported:
https://retrace.fedoraproject.org/faf/reports/254899/
https://bugzilla.redhat.com/show_bug.cgi?id=1050607
Are you able to reproduce this by trying to print the same job again?
Yes, I had at least one web-page print job (I don't remember which) that caused the ghostscript crash each time I tried to print it. There is some randomness, and I always seem to catch it mid-project, so I haven't had time to track it well and am not sure of the cause. I don't have the link, but if I find an example that reliably causes the issue I will add it here.

(** I might be tracking multiple bugs in this report and may have to split it, but I'm hoping it's the same bug, because I haven't figured out what causes the queue clog and both happened at the same time in this instance. There is a little randomness to whether the held jobs print after a reboot: 'systemctl restart cups.service' didn't seem to affect anything, but a reboot usually did. Sometimes a string of jobs prints fine, then one just stops unexpectedly and clogs up the queue, and I find out later that I have a queue of unprinted jobs. Sometimes the held job claims that the printer is offline, even though it can be pinged, appears awake if I look at its control panel, and can be printed to from other hosts.)
Thanks. An example job that triggers it would be very useful. To aid debugging you can turn on preservation of jobs with "cupsctl PreserveJobFiles=Yes". Then next time a job fails you'll be able to attach the /var/spool/cups/d* file corresponding to the failed job. I'll leave this as 'needing info' for now.
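Roughly, something like this should do it (a sketch, assuming the default Fedora spool location and running as root; the d<jobid> file name is how CUPS normally names preserved job data):

  cupsctl PreserveJobFiles=Yes      # keep job files instead of removing them after printing
  # ...then, after the next job clogs or fails:
  ls -lt /var/spool/cups/d*         # the newest d<jobid>-001 file should be the failed job's data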
Created attachment 909876 [details]
clog job

Had a clog. I didn't find any '/var/spool/cups/d*' jobs. There were jobs in /var/spool/cups, but they got flushed before I could get a look at them, because I had to reboot, which successfully restarted printing. I'm not sure if it helps, but the attached file is one that was in '/var/spool/cups/tmp' and that I had been looking at before the reboot. The attached file didn't look remarkable, so I'm wondering if the job before it might have caused the clog but then been cleared before the clog took effect? I will see if I can get another one.
Regarding comment #16: the clog that happened was characteristic, but I just checked abrt and there is no corresponding abrt entry, so maybe it's a different error and this could be closed? I haven't seen the abrt crash for some time, but the clog continues periodically. Maybe it's related to how quickly I queue a series of print jobs rather than a problem with any particular print job?
What does 'rpm -q ghostscript' say? I think a recent update might fix this.

Before trying an updated ghostscript package, could you please try running this?:

  gs -q -dNOPAUSE -dBATCH -dSAFER -sDEVICE=ps2write \
     -sOUTPUTFILE=/dev/null -dLanguageLevel=3 -r1200 \
     -dCompressFonts=false -dNoT3CCITT -dNOINTERPOLATE \
     -c 'save pop' -f cupsS4pXj6-job-that-clogged-que

Do you get any output from that command?
Also, is there anything in /var/log/cups/error_log from the time you found the clogged job?
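For instance, something along these lines (assuming the default log location and that the log hasn't been rotated away) would pull out just the error lines:

  grep '^E ' /var/log/cups/error_log | tail -n 20   # CUPS prefixes error-level lines with "E "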
Created attachment 909937 [details]
snippet of /var/log/messages near the time of the print queue clog during rapid addition of jobs

Maybe the original abrt bug is fixed and unrelated to the clog, because it looks like, around the time of this clog, avahi was arguing with firefox? Maybe adding about 1 print job per second to the queue on a slow machine confused avahi or firefox, and I need to open a different bug and/or work more slowly.
Created attachment 910001 [details]
part of /var/log/cups/error_log for comment #19, though it also contains an old gs snippet
Regarding comment #18:

  $ rpm -q ghostscript
  ghostscript-9.14-3.fc20.x86_64

Running

  gs -q -dNOPAUSE -dBATCH -dSAFER -sDEVICE=ps2write -sOUTPUTFILE=/dev/null -dLanguageLevel=3 -r1200 -dCompressFonts=false -dNoT3CCITT -dNOINTERPOLATE -c 'save pop' -f cupsS4pXj6-job-that-clogged-que

produced no output, and nothing seemed to show up in journalctl or /var/log/cups/error_log.
It looks like this is the problem now:

  D [14/Jun/2014:05:01:18 -0400] [Job 115] Connecting to <removed-ip6>:631
  I [14/Jun/2014:05:01:18 -0400] [Job 115] Connecting to printer.
  [...]
  D [14/Jun/2014:05:01:18 -0400] [Job 115] Connection error: Invalid argument
  E [14/Jun/2014:05:01:18 -0400] [Job 115] The printer is not responding.

Job 115 is for the same destination as (successful) job 114, with device URI dnssd://Lexmark%20MS310dn._ipp._tcp.local/. So it looks like dnssd is sometimes deciding on bad IPv6 addresses. Would it be possible to see what IPv6 address it tried to connect to?

Also useful would be the output of this command, run as root:

  DEVICE_URI=dnssd://Lexmark%20MS310dn._ipp._tcp.local/ \
    /usr/lib/cups/backend/dnssd 1 tim '' 1 '' </dev/null
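If it's easy, it might also help to see which addresses the printer actually advertises on the network; something like this (a rough sketch, assuming the avahi-tools package is installed) should list both the IPv4 and any IPv6 records for the _ipp._tcp service:

  avahi-browse -rt _ipp._tcp    # -r resolves each service to an address, -t exits after the dump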
  # DEVICE_URI=dnssd://Lexmark%20MS310dn._ipp._tcp.local/ \
  > /usr/lib/cups/backend/dnssd 1 tim '' 1 '' </dev/null
  DEBUG: Resolving "Lexmark MS310dn._ipp._tcp.local"...
  STATE: +connecting-to-device
  DEBUG: Resolving "Lexmark MS310dn", regtype="_ipp._tcp", domain="local."...
  DEBUG: Resolved as "ipp://<removed-ipv4-address-of-printer>:631/ipp/print"...
  STATE: -connecting-to-device,offline-report
  DEBUG: Executing backend "/usr/lib/cups/backend/ipp"...
  DEBUG: Sending stdin for job...
  DEBUG: update_reasons(attr=0(), s="+connecting-to-device")
  DEBUG2: op='+', new_reasons=1, state_reasons=0
  STATE: +connecting-to-device
  DEBUG: Looking up "<removed-ipv4-address-of-printer>"...
  DEBUG: backendWaitLoop(snmp_fd=3, addr=0x7f237fa6cfa8, side_cb=0x7f237f4333b0)

(the backendWaitLoop line seems to be the end of the output; I only waited about 3 minutes after that line, though)
If you run that command several times, do you always get the same output? Does it ever include an IPv6 address?
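As a rough sketch (reusing the same DEVICE_URI as above; the timeout is only there because the backend sits in backendWaitLoop rather than exiting), something like this would repeat the resolution a few times and show which address it picks on each run:

  for i in 1 2 3 4 5; do
      DEVICE_URI='dnssd://Lexmark%20MS310dn._ipp._tcp.local/' \
          timeout 30 /usr/lib/cups/backend/dnssd 1 tim '' 1 '' </dev/null 2>&1 |
          grep 'Resolved as'
  done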
(Changing the component and description to describe the IPv6 issue for now.)
This message is a reminder that Fedora 20 is nearing its end of life. Approximately four (4) weeks from now Fedora will stop maintaining and issuing updates for Fedora 20. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '20'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 20 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 20 changed to end-of-life (EOL) status on 2015-06-23. Fedora 20 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug.

Thank you for reporting this bug and we are sorry it could not be fixed.