Red Hat Bugzilla – Attachment 208491 Details for Bug 245535: wait_for_completion scheduling with irqs disabled
Patch to not run a softirq from a hardirq directly

Description: Patch to not run a softirq from a hardirq directly
Filename: dont-run-softirq-in-hardirq.patch
MIME Type: text/x-patch
Creator: Clark Williams
Created: 2007-09-27 14:17:02 UTC
Size: 2.71 KB
Flags: patch, obsolete
Index: linux-2.6.21-rt-hack/kernel/irq/manage.c
===================================================================
--- linux-2.6.21-rt-hack.orig/kernel/irq/manage.c
+++ linux-2.6.21-rt-hack/kernel/irq/manage.c
@@ -761,17 +761,9 @@ static int do_irqd(void * __desc)
 	struct irq_desc *desc = __desc;
 
 #ifdef CONFIG_SMP
-	cpumask_t cpus_allowed, mask;
-	int pinned_cpu;
+	cpumask_t cpus_allowed;
 
 	cpus_allowed = desc->affinity;
-	/*
-	 * Restrict it to one cpu so we avoid being migrated inside of
-	 * do_softirq_from_hardirq()
-	 */
-	pinned_cpu = first_cpu(desc->affinity);
-	mask = cpumask_of_cpu(pinned_cpu);
-	set_cpus_allowed(current, mask);
 #endif
 	current->flags |= PF_NOFREEZE | PF_HARDIRQ;
 
@@ -795,16 +787,9 @@ static int do_irqd(void * __desc)
 	/*
 	 * Did IRQ affinities change?
 	 */
-	if (!cpu_isset(pinned_cpu, desc->affinity)) {
+	if (!cpus_equal(cpus_allowed, desc->affinity)) {
 		cpus_allowed = desc->affinity;
-		/*
-		 * Restrict it to one cpu so we avoid being
-		 * migrated inside of
-		 * do_softirq_from_hardirq()
-		 */
-		pinned_cpu = first_cpu(desc->affinity);
-		mask = cpumask_of_cpu(pinned_cpu);
-		set_cpus_allowed(current, mask);
+		set_cpus_allowed(current, cpus_allowed);
 	}
 #endif
 	schedule();
Index: linux-2.6.21-rt-hack/kernel/softirq.c
===================================================================
--- linux-2.6.21-rt-hack.orig/kernel/softirq.c
+++ linux-2.6.21-rt-hack/kernel/softirq.c
@@ -28,6 +28,26 @@
 #include <linux/tick.h>
 
 #include <asm/irq.h>
+
+#ifdef CONFIG_SMP
+/*
+ * There are too many races to do this right now.
+ *
+ * First, a hard irq thread can migrate to other CPUs,
+ * which would hurt the softirq function.
+ *
+ * Second, if we pin the IRQ thread to a CPU while it
+ * runs the softirq, we in essence disable the IRQ when
+ * a higher-priority process comes in and preempts it.
+ * So even if the IRQ's affinity is fine for other CPUs,
+ * it will need to wait for the high-priority task to
+ * release the CPU before the IRQ thread can finish.
+ *
+ * On uniprocessor, it's fine to do.
+ */
+#define DISABLE_SOFTIRQ_FROM_HARDIRQ 1
+#endif
+
 /*
 - No shared variables, all the data are CPU local.
 - If a softirq needs serialization, let it serialize itself
@@ -102,7 +122,8 @@ static void wakeup_softirqd(int softirq)
 
 	if (unlikely(!tsk))
 		return;
-#if 1
+
+#ifndef DISABLE_SOFTIRQ_FROM_HARDIRQ
 #if defined(CONFIG_PREEMPT_SOFTIRQS) && defined(CONFIG_PREEMPT_HARDIRQS)
 	/*
 	 * Optimization: if we are in a hardirq thread context, and
@@ -412,6 +433,10 @@ void do_softirq_from_hardirq(void)
 {
 	unsigned long p_flags;
 
+#ifdef DISABLE_SOFTIRQ_FROM_HARDIRQ
+	return;
+#endif
+
 	/*
 	 * 'immediate' softirq execution, from hardirq context:
 	 */