Description of problem:
Each time an xinetd service is disabled and re-enabled via a reload, xinetd appends a new entry to its pollfd array without reusing the old one, so the array grows without bound.
Version-Release number of selected component (if applicable):
How reproducible:
Disable and enable an xinetd service multiple times
Steps to Reproduce:
1. As root, run the following loop (with rsync installed; substitute another xinetd service if needed):
# while :; do
sed -i 's/disable.*=.*/disable = no/' /etc/xinetd.d/rsync
service xinetd reload
sed -i 's/disable.*=.*/disable = yes/' /etc/xinetd.d/rsync
service xinetd reload
done
2. While the loop is running, watch xinetd's poll() calls with strace:
# strace -p <pid_of_xinetd> 2>&1 | grep poll
Actual results:
The entries in the pollfd array grow over time (below, there are 192 entries, nearly all of them empty).
poll([{fd=3, events=POLLIN}, {fd=0, events=0}, {fd=0, events=0}, {fd=0, events=0}, ...], 192, 4294967295) = 1 ([{fd=3, revents=POLLIN}])
poll([{fd=3, events=POLLIN}, {fd=0, events=0}, {fd=0, events=0}, {fd=0, events=0}, ...], 193, 4294967295) = ? ERESTART_RESTARTBLOCK (Interrupted by signal)
When the entry count reaches 1024, xinetd spins at 100% CPU.
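To illustrate what I believe is happening: svc_activate() always takes a fresh slot at ps.rws.pfds_last, and deactivation appears only to zero the entry without lowering that high-water mark (which would match the all-zero entries in the strace output above). A minimal standalone sketch of that append-only pattern, with simplified names rather than the actual xinetd code:

#include <poll.h>
#include <stdio.h>
#include <string.h>

#define MAX_FDS 1024

static struct pollfd pfd_array[MAX_FDS];
static int pfds_last = 0;   /* high-water mark, never decremented */

/* Activate: always append, mirroring the unpatched allocation logic. */
static struct pollfd *activate(int fd)
{
    struct pollfd *p = &pfd_array[pfds_last++];
    p->fd = fd;
    p->events = POLLIN;
    return p;
}

/* Deactivate: zero the slot but leave pfds_last alone. */
static void deactivate(struct pollfd *p)
{
    memset(p, 0, sizeof(*p));
}

int main(void)
{
    /* Each disable/enable cycle leaks one (now empty) slot. */
    for (int i = 0; i < 10; i++) {
        struct pollfd *p = activate(3 /* stand-in listening fd */);
        deactivate(p);
        printf("cycle %d: nfds passed to poll() would be %d\n",
               i + 1, pfds_last);
    }
    return 0;
}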
Expected results:
The pollfd array should not grow when a service is simply disabled and re-enabled.
The entries in the pollfd array should be reused (I think?)
Additional info:
I'm a bit concerned this could be used for denial of service if a user had the ability to enable/disable even a limited set of services.
I've worked around it for the moment with the change below (a bit of a guess) to re-use the entries. I feel 'getdtablesize()/ps.ros.max_descriptors' should be used somewhere here.
# diff -c xinetd.orig/service.c xinetd/service.c
*** xinetd.orig/service.c 2016-11-27 17:05:32.706461603 -0800
--- xinetd/service.c 2016-11-27 17:52:35.431318885 -0800
***************
*** 329,334 ****
--- 329,337 ----
struct service_config *scp = SVC_CONF( sp ) ;
status_e status ;
const char *func = "svc_activate" ;
+ #ifdef HAVE_POLL
+ int idx = 0;
+ #endif
/* No activation for MUXCLIENTS.
*/
***************
*** 350,357 ****
}
else
{
! sp->svc_pfd_index = ps.rws.pfds_last ;
! SVC_POLLFD( sp ) = &ps.rws.pfd_array[ps.rws.pfds_last++] ;
}
#endif /* HAVE_POLL */
--- 353,364 ----
}
else
{
! /* search the pfd_array for an empty slot. Otherwise, use a new slot */
! for (idx = 0; (idx < ps.rws.pfds_last) && ps.rws.pfd_array[idx].fd; idx++);
! sp->svc_pfd_index = idx;
! SVC_POLLFD( sp ) = &ps.rws.pfd_array[idx] ;
! if (idx == ps.rws.pfds_last)
! ps.rws.pfds_last++;
}
#endif /* HAVE_POLL */
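For what it's worth, the reuse logic in the patch boils down to a linear first-fit scan over the array. Here is a self-contained sketch of the same idea, with hypothetical names; it inherits the patch's assumption that fd == 0 marks a free slot, even though 0 is technically a valid descriptor:

#include <poll.h>
#include <stdio.h>
#include <string.h>

#define MAX_FDS 1024

static struct pollfd pfd_array[MAX_FDS];
static int pfds_last = 0;

/* First-fit allocation, as in the patch: reuse the first zeroed slot
 * and extend the array only when no free slot exists. */
static int alloc_slot(int fd)
{
    int idx;
    for (idx = 0; idx < pfds_last && pfd_array[idx].fd; idx++)
        ;
    pfd_array[idx].fd = fd;
    pfd_array[idx].events = POLLIN;
    if (idx == pfds_last)
        pfds_last++;
    return idx;
}

/* Freeing zeroes the slot so a later alloc_slot() can reuse it. */
static void free_slot(int idx)
{
    memset(&pfd_array[idx], 0, sizeof(pfd_array[idx]));
}

int main(void)
{
    /* Repeated enable/disable now reuses slot 0: pfds_last stays at 1. */
    for (int i = 0; i < 10; i++) {
        int idx = alloc_slot(4 /* stand-in fd */);
        printf("cycle %d: slot %d, pfds_last = %d\n", i + 1, idx, pfds_last);
        free_slot(idx);
    }
    return 0;
}

A cap such as getdtablesize() or ps.ros.max_descriptors, as suggested above, could additionally bound idx before the array is extended.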
Red Hat Enterprise Linux 6 shipped its last Production 2 phase minor
release, RHEL 6.9, on March 21, 2017. On May 10, 2017, RHEL 6 exits
Production 2 phase and moves into Production 3 phase. For RHEL releases
in Production 3 phase, Red Hat will provide critical-impact security
fixes and urgent-priority bug fixes for the last minor release but will
not provide any software enhancements or hardware enablement.
This BZ does not appear to meet the Production 3 phase inclusion criteria
described above, so it is being closed WONTFIX. If this BZ is critical for
your environment, please open a case in the Red Hat Customer Portal,
https://access.redhat.com, provide a thorough business justification, and
ask that the BZ be re-opened for consideration. Please note, only
critical-impact security fixes and urgent-priority bug fixes will be
considered; no software enhancements or hardware enablement will be
performed.