Bug 213498

Summary: cups segmentation fault on lpq cancel job
Product: Fedora
Reporter: Michael Sellers <msellers>
Component: cups
Assignee: Tim Waugh <twaugh>
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Version: 6
CC: ajschult784, keith.holmes, mra
Hardware: i386
OS: Linux
Fixed In Version: 1.2.5-2.fc6.8
Doc Type: Bug Fix
Last Closed: 2006-11-09 16:04:11 UTC
Bug Blocks: 207681

Description Michael Sellers 2006-11-01 17:42:28 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.7) Gecko/20061027 Fedora/1.5.0.7-8.fc6 Firefox/1.5.0.7

Description of problem:
CUPS only prints the first item in the queue, and the cups daemon crashes when 'cancel job#' is used after listing jobs with lpq.

Version-Release number of selected component (if applicable):
cups-1.2.4-9

How reproducible:
Always


Steps to Reproduce:
1. Start CUPS
2. Have one or more jobs in the print queue
3. Run lpq to list the jobs
4. Run cancel <job number>

Actual Results:
cups daemon crashes

Expected Results:
job cancelled

Additional info:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1208826160 (LWP 4048)]
0x005aa2ad in cupsdSaveJob (job=0x82e6430) at job.c:1396
1396      if (strncmp(job->scon, UNKNOWN_SL, strlen(UNKNOWN_SL)) != 0)
(gdb) p job
$1 = (cupsd_job_t *) 0x82e6430
(gdb) p job->scan
There is no member named scan.
(gdb) p job->scon
$2 = 0x0
(gdb) bt
#0  0x005aa2ad in cupsdSaveJob (job=0x82e6430) at job.c:1396
#1  0x005abaaf in cupsdCancelJob (job=0x82e6430, purge=0, newstate=IPP_JOB_CANCELED) at job.c:274
#2  0x0059f9af in cancel_job (con=0x82f8548, uri=0x82fbe90) at ipp.c:3291
#3  0x005a8267 in cupsdProcessIPPRequest (con=0x82f8548) at ipp.c:486
#4  0x0058628b in cupsdReadClient (con=0x82f8548) at client.c:2000
#5  0x0059572e in main (argc=2, argv=0xbfd59e24) at main.c:938
(gdb)
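
The backtrace shows the crash is a NULL-pointer dereference: job->scon is
0x0, and strncmp() unconditionally reads its first argument. A minimal
standalone sketch of the failure mode and the guard that avoids it (the
struct and macro below are simplified stand-ins, not the actual CUPS
source; cupsd_job_t and the real UNKNOWN_SL live in the LSPP-patched
scheduler):

  #include <stdio.h>
  #include <string.h>

  #define UNKNOWN_SL "UNKNOWN"            /* stand-in for the LSPP macro */

  struct job_sketch { char *scon; };      /* stand-in for cupsd_job_t */

  int main(void)
  {
    struct job_sketch job = { NULL };     /* scon never assigned, as in the bug */

    /* Unguarded, like job.c:1396 before the fix -- strncmp(NULL, ...)
     * dereferences NULL and raises SIGSEGV:
     *   if (strncmp(job.scon, UNKNOWN_SL, strlen(UNKNOWN_SL)) != 0) ...
     */

    /* Guarded: && short-circuits, so strncmp() only runs when scon is set */
    if (job.scon && strncmp(job.scon, UNKNOWN_SL, strlen(UNKNOWN_SL)) != 0)
      printf("would relabel the spool file\n");
    else
      printf("scon unset or UNKNOWN_SL; relabeling skipped\n");

    return 0;
  }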

Comment 1 Tim Waugh 2006-11-01 18:04:36 UTC
CC: Matt
Matt, shouldn't that whole section in cupsdSaveJob be wrapped in 'if
(is_lspp_config())' anyway?  But it looks like there is a missing '!job->scon
||' as well -- what do you think?

Comment 2 Tim Waugh 2006-11-01 18:19:15 UTC
My proposed minimal fix:

--- cups-1.2.4/scheduler/job.c  2006-10-16 15:30:02.000000000 -0400
+++ cups-1.2.5/scheduler/job.c  2006-11-01 18:16:18.000000000 +0000
@@ -1395,7 +1395,7 @@
   fchown(cupsFileNumber(fp), RunUser, Group);
 
 #ifdef WITH_LSPP
-  if (strncmp(job->scon, UNKNOWN_SL, strlen(UNKNOWN_SL)) != 0)
+  if (job->scon && strncmp(job->scon, UNKNOWN_SL, strlen(UNKNOWN_SL)) != 0)
   {
     if (getfilecon(filename, &spoolcon) == -1)
     {

but I expect that whole 'if' clause should be in an 'if (is_lspp_config())'
conditional as well.
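
For illustration only, a sketch of that fuller guard (is_lspp_config() and
UNKNOWN_SL come from the LSPP patch set; the surrounding relabeling code is
elided, and the exact shape is assumed rather than taken from the committed
fix):

  #ifdef WITH_LSPP
    /*
     * Skip SELinux spool-file relabeling unless the server is running an
     * LSPP configuration AND the job actually carries a security context.
     * && short-circuits, so strncmp() never sees a NULL job->scon.
     */
    if (is_lspp_config() &&
        job->scon &&
        strncmp(job->scon, UNKNOWN_SL, strlen(UNKNOWN_SL)) != 0)
    {
      if (getfilecon(filename, &spoolcon) == -1)
      {
        /* ... existing error handling ... */
      }
      /* ... rest of the relabeling logic ... */
    }
  #endif /* WITH_LSPP */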

Comment 3 Matt Anderson 2006-11-01 18:56:30 UTC
I think your proposed minimal fix is best.  Since this is occurring in
cupsdSaveJob() we need it to honor job->scon if it exists, regardless of how
the server is configured at the moment.

Comment 4 Tim Waugh 2006-11-02 12:18:35 UTC
Fixed in CVS for FC-6 and devel.

Comment 5 Tim Waugh 2006-11-03 09:04:32 UTC
Please try this test update:

https://www.redhat.com/archives/fedora-test-list/2006-November/msg00043.html

You can do this with:

  yum --enablerepo=updates-testing update 'cups*'


Comment 6 Michael Sellers 2006-11-03 12:19:20 UTC
Stopping cups:                                             [FAILED]
Starting cups:                                             [  OK  ]
[root@halogen init.d]# lpq
toby is ready and printing
Rank    Owner   Job     File(s)                         Total Size
active  mseller 10      evince-print                    250880 bytes
[root@halogen init.d]# cancel 10
[root@halogen init.d]# lpq
toby is ready
no entries
[root@halogen init.d]# ./cups status
cupsd (pid 32332) is running...

Comment 7 Tim Waugh 2006-11-06 10:38:33 UTC
*** Bug 213735 has been marked as a duplicate of this bug. ***