Red Hat Bugzilla – Attachment 280161 Details for Bug 254195: use after free in nlm subsystem
Description: patch -- Add lockd reference counting and clean up lockd startup/shutdown (rhel5)
Filename:    0002-BZ-254195-NLM-Add-lockd-reference-counting-and-cle.patch
MIME Type:   text/plain
Creator:     Jeff Layton
Created:     2007-12-06 19:32:35 UTC
Size:        6.72 KB
Flags:       patch, obsolete
From c6342f6c43865ae9e9d8bc8fb5952f4222d90c6e Mon Sep 17 00:00:00 2001
From: Jeff Layton <jlayton@redhat.com>
Date: Thu, 6 Dec 2007 14:24:07 -0500
Subject: [PATCH] BZ#254195: NLM: Add lockd reference counting and clean up lockd startup/shutdown

When a lock that a client is blocking on comes free, lockd does this in
nlmsvc_grant_blocked():

	nlm_async_call(block->b_call, NLMPROC_GRANTED_MSG, &nlmsvc_grant_ops);

the callback from this call is nlmsvc_grant_callback(). That function
does this at the end to wake up lockd:

	svc_wake_up(block->b_daemon);

However there is no guarantee that lockd will be up when this happens.
If someone shuts down or restarts lockd before the async call completes,
then the b_daemon pointer will point to freed memory.

This patch adds a new reference counter to lockd. lockd_down/up
increment this counter when the nlmsvc_users count transitions to/from
0.

It changes lockd to shut only if the nlmsvc_ref goes to 0. When checking
for this, lockd takes the nlmsvc_mutex and holds on to it until lockd is
down. This prevents multiple lockd's from running at any given time
since lockd_up won't be able to complete until lockd is all the way
down. Also, with this patch, there's no real reason to have
lockd_down() wait for lockd to come down, so this patch has it return
after sending the signal to lockd.

Finally, to prevent the oops when there are active blocks, we have it
take a nlmsvc_ref when nlm_blocked is first made non-empty and drop
nlmsvc_ref when last block is removed from nlm_blocked.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
---
 fs/lockd/svc.c              | 71 +++++++++++++++++++++----------------------
 fs/lockd/svclock.c          |  4 ++
 include/linux/lockd/lockd.h |  1 +
 3 files changed, 40 insertions(+), 36 deletions(-)

diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
index 151777f..d4865de 100644
--- a/fs/lockd/svc.c
+++ b/fs/lockd/svc.c
@@ -52,12 +52,12 @@ EXPORT_SYMBOL(nlmsvc_ops);
 static DEFINE_MUTEX(nlmsvc_mutex);
 static unsigned int		nlmsvc_users;
 static pid_t			nlmsvc_pid;
+atomic_t			nlmsvc_ref = ATOMIC_INIT(0);
 static struct svc_serv		*nlmsvc_serv;
 int				nlmsvc_grace_period;
 unsigned long			nlmsvc_timeout;

 static DECLARE_COMPLETION(lockd_start_done);
-static DECLARE_WAIT_QUEUE_HEAD(lockd_exit);

 /*
  * These can be set at insmod time (useful for NFS as root filesystem),
@@ -140,12 +140,16 @@ lockd(struct svc_rqst *rqstp)

 	/*
 	 * The main request loop. We don't terminate until the last
-	 * NFS mount or NFS daemon has gone away, and we've been sent a
-	 * signal, or else another process has taken over our job.
+	 * NFS mount or NFS daemon has gone away, and the nlm_blocked
+	 * list is empty. The nlmsvc_mutex ensures that we prevent
+	 * a new lockd from being started before the old lockd is
+	 * down when lockd is restarted.
 	 */
-	while ((nlmsvc_users || !signalled()) && nlmsvc_pid == current->pid) {
+	mutex_lock(&nlmsvc_mutex);
+	while (atomic_read(&nlmsvc_ref) != 0) {
 		long timeout = MAX_SCHEDULE_TIMEOUT;

+		mutex_unlock(&nlmsvc_mutex);
 		if (signalled()) {
 			flush_signals(current);
 			if (nlmsvc_ops) {
@@ -171,11 +175,11 @@ lockd(struct svc_rqst *rqstp)
 		 */
 		err = svc_recv(serv, rqstp, timeout);
 		if (err == -EAGAIN || err == -EINTR)
-			continue;
+			goto again;
 		if (err < 0) {
 			printk(KERN_WARNING
-			       "lockd: terminating on error %d\n",
-			       -err);
+			       "lockd: terminating on error %d\n", -err);
+			mutex_lock(&nlmsvc_mutex);
 			break;
 		}

@@ -183,25 +187,23 @@ lockd(struct svc_rqst *rqstp)
 			(unsigned)ntohl(rqstp->rq_addr.sin_addr.s_addr));

 		svc_process(serv, rqstp);
-
+again:
+		mutex_lock(&nlmsvc_mutex);
 	}

-	flush_signals(current);
-
 	/*
-	 * Check whether there's a new lockd process before
-	 * shutting down the hosts and clearing the slot.
+	 * at this point lockd is committed to going down. We hold the
+	 * nlmsvc_mutex until just before exit to prevent a new one
+	 * from starting before it's down.
 	 */
-	if (!nlmsvc_pid || current->pid == nlmsvc_pid) {
-		if (nlmsvc_ops)
-			nlmsvc_invalidate_all();
-		nlm_shutdown_hosts();
-		nlmsvc_pid = 0;
-		nlmsvc_serv = NULL;
-	} else
-		printk(KERN_DEBUG
-			"lockd: new process, skipping host shutdown\n");
-	wake_up(&lockd_exit);
+	flush_signals(current);
+
+	if (nlmsvc_ops)
+		nlmsvc_invalidate_all();
+	nlm_shutdown_hosts();
+	nlmsvc_pid = 0;
+	nlmsvc_serv = NULL;
+	mutex_unlock(&nlmsvc_mutex);

 	/* Exit the RPC thread */
 	svc_exit_thread(rqstp);
@@ -254,6 +256,10 @@ lockd_up_proto(int proto)
 	int error = 0;

 	mutex_lock(&nlmsvc_mutex);
+
+	if (!nlmsvc_users)
+		atomic_inc(&nlmsvc_ref);
+
 	/*
 	 * Check whether we're already up and running.
 	 */
@@ -304,6 +310,12 @@ lockd_up_proto(int proto)
 destroy_and_out:
 	svc_destroy(serv);
 out:
+	/*
+	 * don't leave refcount high if there were already users, or if there
+	 * was an error in starting up lockd
+	 */
+	if (!nlmsvc_users && error)
+		atomic_dec(&nlmsvc_ref);
 	if (!error)
 		nlmsvc_users++;
 	mutex_unlock(&nlmsvc_mutex);
@@ -342,21 +354,8 @@ lockd_down(void)
 	}
 	warned = 0;

+	atomic_dec(&nlmsvc_ref);
 	kill_proc(nlmsvc_pid, SIGKILL, 1);
-	/*
-	 * Wait for the lockd process to exit, but since we're holding
-	 * the lockd semaphore, we can't wait around forever ...
-	 */
-	clear_thread_flag(TIF_SIGPENDING);
-	wait_event_timeout(lockd_exit, nlmsvc_pid == 0, HZ);
-	if (nlmsvc_pid) {
-		printk(KERN_WARNING
-			"lockd_down: lockd failed to exit, clearing pid\n");
-		nlmsvc_pid = 0;
-	}
-	spin_lock_irq(&current->sighand->siglock);
-	recalc_sigpending();
-	spin_unlock_irq(&current->sighand->siglock);
 out:
 	mutex_unlock(&nlmsvc_mutex);
 }
diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c
index 0377832..15705ae 100644
--- a/fs/lockd/svclock.c
+++ b/fs/lockd/svclock.c
@@ -60,6 +60,8 @@ nlmsvc_insert_block(struct nlm_block *block, unsigned long when)
 	struct nlm_block **bp, *b;

 	dprintk("lockd: nlmsvc_insert_block(%p, %ld)\n", block, when);
+	if (nlm_blocked == NULL)
+		atomic_inc(&nlmsvc_ref);
 	kref_get(&block->b_count);
 	if (block->b_queued)
 		nlmsvc_remove_block(block);
@@ -241,6 +243,8 @@ static int nlmsvc_unlink_block(struct nlm_block *block)
 	/* Remove block from list */
 	status = posix_unblock_lock(block->b_file->f_file, &block->b_call->a_args.lock.fl);
 	nlmsvc_remove_block(block);
+	if (nlm_blocked == NULL)
+		atomic_dec(&nlmsvc_ref);
 	return status;
 }

diff --git a/include/linux/lockd/lockd.h b/include/linux/lockd/lockd.h
index 0d92c46..2bd67b3 100644
--- a/include/linux/lockd/lockd.h
+++ b/include/linux/lockd/lockd.h
@@ -141,6 +141,7 @@ extern struct svc_procedure	nlmsvc_procedures[];
 #ifdef CONFIG_LOCKD_V4
 extern struct svc_procedure	nlmsvc_procedures4[];
 #endif
+extern atomic_t			nlmsvc_ref;
 extern int			nlmsvc_grace_period;
 extern unsigned long		nlmsvc_timeout;

-- 
1.5.3.3