
Bug 1820709

Summary: python-linux-procfs: pflags exception getting info for completed PID
Product: Red Hat Enterprise Linux 8
Reporter: Mike Stowell <mstowell>
Component: python-linux-procfs
Assignee: John Kacur <jkacur>
Status: CLOSED ERRATA
QA Contact: Mike Stowell <mstowell>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 8.6
CC: bhu, jkacur, qzhao, rt-maint
Target Milestone: rc
Keywords: Triaged
Target Release: 8.6
Flags: pm-rhel: mirror+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: python-linux-procfs-0.6.3-3.el8
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 2012288 (view as bug list)
Environment:
Last Closed: 2022-05-10 15:24:48 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1802014, 2012288, 2020013

Description Mike Stowell 2020-04-03 16:05:27 UTC
When running 'pflags', there is a window between the time it collects all PIDs and the time it reads each /proc/$pid/stat. If a PID exits in that window, an exception is thrown because its /proc/$pid/stat no longer exists.

Example reproducer - rapidly generate some 'sleep' PIDs:
~$ while true ; do sleep 1 & sleep 0.01; done 1>/dev/null 2>&1 &
[1] 14338

Then try running pflags:
~$ pflags
Traceback (most recent call last):
  File "/usr/bin/pflags", line 76, in <module>
    main(sys.argv)
  File "/usr/bin/pflags", line 53, in main
    len_comms = [len(ps[pid]["stat"]["comm"]) for pid in pids if pid in ps]
  File "/usr/bin/pflags", line 53, in <listcomp>
    len_comms = [len(ps[pid]["stat"]["comm"]) for pid in pids if pid in ps]
  File "/usr/lib/python3.6/site-packages/procfs/procfs.py", line 325, in __getitem__
    setattr(self, attr, sclass(self.pid, self.basedir))
  File "/usr/lib/python3.6/site-packages/procfs/procfs.py", line 121, in __init__
    self.load(basedir)
  File "/usr/lib/python3.6/site-packages/procfs/procfs.py", line 142, in load
    f = open("%s/%d/stat" % (basedir, self.pid))
FileNotFoundError: [Errno 2] No such file or directory: '/proc/14697/stat'

Tested with:
~$ rpm -qf $(which pflags)
python3-linux-procfs-0.6-7.el8.noarch
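
The race is inherent to walking /proc: a PID gathered from the directory listing can exit before its stat file is opened. Below is a minimal standalone sketch of a tolerant scan, using a hypothetical read_stat_fields() helper and assuming it is acceptable to silently skip processes that have already exited; this is only an illustration, not the patch that shipped in python-linux-procfs-0.6.3-3.el8.

# Hypothetical sketch (not the actual python-linux-procfs fix): any open()
# under /proc/<pid>/ can raise FileNotFoundError if the PID exits between the
# directory listing and the read, so the exception is treated as "process
# already gone" and the PID is skipped.
import os

def read_stat_fields(pid, basedir="/proc"):
    """Return the whitespace-split fields of /proc/<pid>/stat,
    or None if the process exited before the file could be read."""
    try:
        with open("%s/%d/stat" % (basedir, pid)) as f:
            return f.read().split()
    except (FileNotFoundError, ProcessLookupError):
        return None

pids = [int(name) for name in os.listdir("/proc") if name.isdigit()]
stats = {}
for pid in pids:
    fields = read_stat_fields(pid)
    if fields is not None:    # skip PIDs that finished mid-scan
        stats[pid] = fields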

Comment 1 Mike Stowell 2020-04-03 16:19:49 UTC
Another reproducer using tuna-0.14-4.el8.noarch:

	~$ while true ; do sleep 1 & sleep 0.01; done 1>/dev/null 2>&1 &
	[1] 362818

	~$ tuna --cpus=0,1 --run='ps all'

	<...snip...>
		
	0     0  367734  362818  20   0   7280   816 hrtime S    pts/0      0:00 sleep 1
	0     0  367736    9523  20   0  47440 11976 -      S+   pts/0      0:00 /usr/libexec/platform-python /usr/bin/tuna --cpus=0,1 --run=ps all
	0     0  367737  362818  20   0   7280   824 hrtime S    pts/0      0:00 sleep 1
	0     0  367739  362818  20   0   7280   772 hrtime S    pts/0      0:00 sleep 1
	0     0  367741  362818  20   0   7280   848 hrtime S    pts/0      0:00 sleep 1
	0     0  367743  362818  20   0   7280   712 hrtime S    pts/0      0:00 sleep 1
	0     0  367745  367736  20   0  45404  3644 -      R+   pts/0      0:00 ps all
	0     0  367746  362818  20   0   7280   780 hrtime S    pts/0      0:00 sleep 1
	Traceback (most recent call last):
	  File "/usr/bin/tuna", line 387, in thread_mapper
	    return ps.find_by_regex(re.compile(fnmatch.translate(s)))
	  File "/usr/lib/python3.6/site-packages/procfs/procfs.py", line 489, in find_by_regex
	    for pid in self.processes.keys():
	RuntimeError: dictionary changed size during iteration

	During handling of the above exception, another exception occurred:

	Traceback (most recent call last):
	  File "/usr/bin/tuna", line 711, in <module>
	    main()
	  File "/usr/bin/tuna", line 669, in main
	    list(map(thread_mapper, a.split(","))))
	  File "/usr/bin/tuna", line 389, in thread_mapper
	    return ps.find_by_name(s)
	  File "/usr/lib/python3.6/site-packages/procfs/procfs.py", line 475, in find_by_name
	    for pid in self.processes.keys():
	RuntimeError: dictionary changed size during iteration

Comment 2 John Kacur 2020-04-14 19:20:36 UTC
Note that this is really two problems:

1. A process disappears before it can be added to the dictionary
2. A process disappears after it is added to the dictionary, but before it is used in find_by_name (a sketch of a defensive iteration follows below)
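
For the second problem, the RuntimeError comes from find_by_name/find_by_regex iterating self.processes.keys() while the dictionary is modified. A common defensive pattern, shown here as a standalone sketch over a plain pid-to-info mapping (assumed to expose info["stat"]["comm"] the way pflags and tuna read it), is to iterate over a list() snapshot of the keys and skip entries that vanish mid-loop. This illustrates the technique only; it is not necessarily the fix applied upstream.

import re

def find_by_comm_regex(processes, regex):
    """Return the PIDs whose command name matches 'regex'.

    'processes' is a pid -> info mapping; each info is assumed to expose
    info["stat"]["comm"].  Iterating over a list() snapshot of the keys
    avoids "dictionary changed size during iteration" if the mapping is
    refreshed while we loop, and the .get()/None check tolerates entries
    removed after the snapshot was taken."""
    matches = []
    for pid in list(processes.keys()):   # snapshot, safe against resizing
        info = processes.get(pid)
        if info is None:                 # entry removed after the snapshot
            continue
        if regex.match(info["stat"]["comm"]):
            matches.append(pid)
    return matches

# Example against a static mapping:
procs = {1: {"stat": {"comm": "systemd"}}, 14338: {"stat": {"comm": "sleep"}}}
print(find_by_comm_regex(procs, re.compile("sleep")))   # -> [14338]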

Comment 16 John Kacur 2021-11-30 23:28:09 UTC
*** Bug 2027482 has been marked as a duplicate of this bug. ***

Comment 23 errata-xmlrpc 2022-05-10 15:24:48 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (python-linux-procfs bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2064