Bug 1897023

Summary: Cups service restart sequence during upgrade incorrect
Product: Fedora
Reporter: Ryan <stealthcipher>
Component: cups
Assignee: Zdenek Dohnal <zdohnal>
Status: CLOSED RAWHIDE
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: low
Priority: unspecified
Version: 33
CC: twaugh, zdohnal
Hardware: x86_64
OS: Linux
Last Closed: 2020-11-12 13:47:04 UTC
Type: Bug

Description Ryan 2020-11-12 05:47:47 UTC
Description of problem:
The cups RPM scriptlet that performs a restart of the cups.socket and cups.service doesn't observe the required order for these units. 

Version-Release number of selected component (if applicable):
2.3.3-13.fc33.x86_64
2.3.3-16.fc33.x86_64

and previous versions back to F31

How reproducible:
every upgrade

Steps to Reproduce:
1. Upgrade cups, note error message
2. See journalctl -xe and confirm that the cups.socket unit failed to start because cups.service was still running.

Actual results:
sudo journalctl -xe

cups.socket: Socket service cups.service already active, refusing.
Failed to listen on CUPS Scheduler.

Expected results:
cups.service stopped before cups.socket attempted to start/stop

Additional info:
This can be worked around by manually stopping cups.service and then starting cups.socket
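The workaround mentioned above, spelled out as commands (requires root; unit names as in this report):

```shell
# Stop the running scheduler so the socket unit can claim the socket
sudo systemctl stop cups.service

# With cups.service stopped, cups.socket starts cleanly and will
# socket-activate cupsd on the next connection to /run/cups/cups.sock
sudo systemctl start cups.socket
```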

Comment 1 Zdenek Dohnal 2020-11-12 13:47:04 UTC
Hi Ryan,

thank you for reporting the issue!

However, if cups.service is running (whether it is enabled and started on its own, or activated via cups.path), cups.socket will always return this error, and changing the order in the RPM scriptlet didn't change anything according to my testing.

%systemd_postun_with_restart runs 'systemctl try-restart' on every unit it takes as a parameter. The operation appears to be atomic, so the ordering from the unit files is not applied. So when it tries to restart cups.socket, the restart fails if cups.service is already running (regardless of the reason).
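For illustration, the macro expands to roughly the following %postun scriptlet (a sketch of its assumed form, not the exact macro definition from that Fedora release):

```shell
# Rough sketch of what %systemd_postun_with_restart cups.service cups.socket
# expands to in %postun (assumed form; the real macro may differ):
if [ $1 -ge 1 ]; then
    # $1 >= 1 means package upgrade, not final removal.
    # try-restart restarts each listed unit only if it is already active;
    # there is no unit-file ordering here, so cups.socket is restarted
    # while cups.service is still up, which systemd refuses.
    systemctl try-restart cups.service cups.socket || :
fi
```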

One solution could be to make cups.socket listen on port 631 instead of /run/cups/cups.sock, but that was rejected by upstream due to a systemd bug/design limitation: https://github.com/apple/cups/issues/4930#issuecomment-264758127 .

Another option is to disable some of those units, but that affects functionality: if you disable cups.path and cups.socket, you need cups.service running permanently.
Or, if you disable cups.service, you need to start cups.service manually once (which creates the keepalive file for cups.path) and keep cups.socket and cups.path running.

Since this is a harmless warning (cupsd functionality isn't affected), I'll redirect the %systemd_postun_with_restart output to /dev/null.
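A sketch of the kind of spec change described (hypothetical form; the actual cups.spec commit may differ in detail):

```shell
# %postun in cups.spec: silence the harmless try-restart warning by
# discarding the scriptlet's output (sketch, not the real diff)
%postun
%systemd_postun_with_restart %{name}.service %{name}.socket > /dev/null 2>&1 || :
```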

Closing as RAWHIDE.