Description of problem:

Starting to see automount segfault in lookup_mount:

(gdb) bt
#0 0x003570c3 in strlen () from /lib/libc.so.6
#1 0x00637d5a in lookup_mount (ap=0x83ef600, name=0xb570b2a0 ".TemporaryItems", name_len=15, context=0x83e9090) at lookup_ldap.c:2687
#2 0x002b3aa9 in do_lookup_mount (ap=0x83ef600, map=0x83ef6a0, name=0xb570b2a0 ".TemporaryItems", name_len=15) at lookup.c:659
#3 0x002b46f1 in lookup_nss_mount (ap=0x83ef600, source=0x0, name=0xb570b2a0 ".TemporaryItems", name_len=15) at lookup.c:867
#4 0x002ac0ec in do_mount_indirect (arg=0x83fb030) at indirect.c:764
#5 0x0089a73b in start_thread () from /lib/libpthread.so.0
#6 0x003b8cfe in clone () from /lib/libc.so.6
(gdb) up
#1 0x00637d5a in lookup_mount (ap=0x83ef600, name=0xb570b2a0 ".TemporaryItems", name_len=15, context=0x83e9090) at lookup_ldap.c:2687
2687            mapent_len = strlen(me->mapent);
(gdb) print *me
Cannot access memory at address 0x0

Not quite sure how this can happen:

2686    if (me && (me->source == source || *me->key == '/')) {
2687            mapent_len = strlen(me->mapent);
2688            mapent = alloca(mapent_len + 1);
2689            strcpy(mapent, me->mapent);
2690    }

Version-Release number of selected component (if applicable):
autofs-5.0.1-0.rc2.131.el5_4.1

Some messages:

Dec 29 16:37:51 earth automount[6121]: update_negative_cache: key "*" not found in map.
Dec 29 18:58:10 earth automount[6121]: update_negative_cache: key "gourlay" not found in map.
Dec 29 19:09:08 earth automount[6121]: update_negative_cache: key "gourlay" not found in map.
Dec 29 19:13:10 earth automount[6121]: update_negative_cache: key "gourlay" not found in map.
Dec 30 09:25:42 earth automount[6121]: update_negative_cache: key ".git" not found in map.
Dec 30 09:25:42 earth automount[6121]: update_negative_cache: key "objects" not found in map.
Dec 30 09:51:49 earth automount[6121]: update_negative_cache: key "local" not found in map.
Dec 30 09:51:55 earth automount[6121]: last message repeated 2 times
Dec 30 09:57:06 earth automount[6121]: update_negative_cache: key "*" not found in map.
Dec 30 09:58:44 earth automount[6121]: update_negative_cache: key "*" not found in map.
Dec 30 10:05:20 earth cfengine:earth[19541]: Restart: Starting automount: [ OK ]
Dec 30 10:55:31 earth automount[19936]: update_negative_cache: key "._" not found in map.
Dec 30 10:55:31 earth automount[19936]: update_negative_cache: key "mach_kernel" not found in map.
Dec 30 17:23:37 earth automount[19936]: update_negative_cache: key "._" not found in map.
Dec 30 17:36:10 earth automount[19936]: update_negative_cache: key ".TemporaryItems" not found in map.
Dec 30 17:36:10 earth automount[19936]: update_negative_cache: key ".Trashes" not found in map.
Dec 30 17:36:10 earth automount[19936]: update_negative_cache: key "*" not found in map.
Dec 30 18:05:21 earth cfengine:earth[21723]: Restart: Starting automount: [ OK ]

Let me know if running in debug mode would be useful.
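Regarding the snippet above: the update_negative_cache messages make me wonder whether the entry is being flipped to a negative one (its mapent going away) while lookup_mount() is still holding the pointer. Purely as a defensive sketch, not a proposed fix, skipping entries with no mapent would at least avoid calling strlen() on a NULL pointer:

        /* sketch only: treat an entry with no mapent as negative and skip it */
        if (me && me->mapent && (me->source == source || *me->key == '/')) {
                mapent_len = strlen(me->mapent);
                mapent = alloca(mapent_len + 1);
                strcpy(mapent, me->mapent);
        }

That only masks the symptom, of course; the real question is how the entry gets into that state in the first place.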
(gdb) thr app all bt

Thread 8 (process 19936):
#0 0x00f89402 in __kernel_vsyscall ()
#1 0x007a6a6e in do_sigwait () from /lib/libpthread.so.0
#2 0x007a6b0f in sigwait () from /lib/libpthread.so.0
#3 0x0057b388 in statemachine (arg=<value optimized out>) at automount.c:1315
#4 0x0057c89e in main (argc=0, argv=0xbf982698) at automount.c:2143

Thread 7 (process 19937):
#0 0x00f89402 in __kernel_vsyscall ()
#1 0x007a2d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
#2 0x00593707 in alarm_handler (arg=0x0) at alarm.c:223
#3 0x0079e73b in start_thread () from /lib/libpthread.so.0
#4 0x00313cfe in clone () from /lib/libc.so.6

Thread 6 (process 19938):
#0 0x00f89402 in __kernel_vsyscall ()
#1 0x007a2d12 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
#2 0x0058c1e1 in st_queue_handler (arg=0x0) at state.c:1080
#3 0x0079e73b in start_thread () from /lib/libpthread.so.0
#4 0x00313cfe in clone () from /lib/libc.so.6

Thread 5 (process 19941):
#0 0x00f89402 in __kernel_vsyscall ()
#1 0x0030a023 in poll () from /lib/libc.so.6
#2 0x0057e7b0 in handle_mounts (arg=0xbf97faa0) at automount.c:866
#3 0x0079e73b in start_thread () from /lib/libpthread.so.0
#4 0x00313cfe in clone () from /lib/libc.so.6

Thread 4 (process 19963):
#0 0x00f89402 in __kernel_vsyscall ()
#1 0x0030a023 in poll () from /lib/libc.so.6
#2 0x0057e7b0 in handle_mounts (arg=0xbf97faa0) at automount.c:866
#3 0x0079e73b in start_thread () from /lib/libpthread.so.0
#4 0x00313cfe in clone () from /lib/libc.so.6

Thread 3 (process 19972):
#0 0x00f89402 in __kernel_vsyscall ()
#1 0x0030a023 in poll () from /lib/libc.so.6
#2 0x0057e7b0 in handle_mounts (arg=0xbf97faa0) at automount.c:866
#3 0x0079e73b in start_thread () from /lib/libpthread.so.0
#4 0x00313cfe in clone () from /lib/libc.so.6

Thread 2 (process 19973):
#0 0x00f89402 in __kernel_vsyscall ()
#1 0x0030a023 in poll () from /lib/libc.so.6
#2 0x0057e7b0 in handle_mounts (arg=0xbf97faa0) at automount.c:866
#3 0x0079e73b in start_thread () from /lib/libpthread.so.0
#4 0x00313cfe in clone () from /lib/libc.so.6

Thread 1 (process 20639):
#0 0x002b20c3 in strlen () from /lib/libc.so.6
#1 0x00c9bd5a in lookup_mount (ap=0x8afb600, name=0xb57872a0 "System", name_len=6, context=0x8af5090) at lookup_ldap.c:2687
#2 0x00588aa9 in do_lookup_mount (ap=0x8afb600, map=0x8afb6a0, name=0xb57872a0 "System", name_len=6) at lookup.c:659
#3 0x005896f1 in lookup_nss_mount (ap=0x8afb600, source=0x0, name=0xb57872a0 "System", name_len=6) at lookup.c:867
#4 0x005810ec in do_mount_indirect (arg=0x8b08b60) at indirect.c:764
#5 0x0079e73b in start_thread () from /lib/libpthread.so.0
#6 0x00313cfe in clone () from /lib/libc.so.6
(In reply to comment #0)
>
> Not quite sure how this can happen:

Me too.

> 2686    if (me && (me->source == source || *me->key == '/')) {
> 2687            mapent_len = strlen(me->mapent);
> 2688            mapent = alloca(mapent_len + 1);
> 2689            strcpy(mapent, me->mapent);
> 2690    }
>
> Version-Release number of selected component (if applicable):
> autofs-5.0.1-0.rc2.131.el5_4.1
>
> Some messages:
>
> Dec 29 16:37:51 earth automount[6121]: update_negative_cache: key "*" not found
> in map.

In the code there's an implicit assumption that "*" is not used as a key.
Looks like I need to special case this but it's a little tricky so I'll
need to have a look around first.

Ian
(In reply to comment #1)
> > Some messages:
> >
> > Dec 29 16:37:51 earth automount[6121]: update_negative_cache: key "*" not found
> > in map.
>
> In the code there's an implicit assumption that "*" is not used as
> a key. Looks like I need to special case this but it's a little
> tricky so I'll need to have a look around first.

Do you have a wild card entry in your map, either a key of "*" or (in LDAP) "/"?

Ian
(In reply to comment #2)
> Do you have a wild card entry in your map, either a key of "*"
> or (in LDAP) "/"?

Nope, there are the auto.master entries of:

cn: /home
cn: /data
cn: /data4
cn: /nfs

that have a /, but otherwise just normal alphanumeric entries.
Still happening occasionally:

(gdb) bt
#0 0x002b20c3 in strlen () from /lib/libc.so.6
#1 0x003c3d5a in lookup_mount (ap=0x86d6600, name=0xb4dfd2a0 ".hidden", name_len=7, context=0x86e7848) at lookup_ldap.c:2687
#2 0x00e89aa9 in do_lookup_mount (ap=0x86d6600, map=0x86d66a0, name=0xb4dfd2a0 ".hidden", name_len=7) at lookup.c:659
#3 0x00e8a6f1 in lookup_nss_mount (ap=0x86d6600, source=0x0, name=0xb4dfd2a0 ".hidden", name_len=7) at lookup.c:867
#4 0x00e820ec in do_mount_indirect (arg=0x87a0cc0) at indirect.c:764
#5 0x0073a73b in start_thread () from /lib/libpthread.so.0
#6 0x00313cfe in clone () from /lib/libc.so.6

Last message before the crash:

Feb 15 16:37:55 earth automount[22071]: update_negative_cache: key "*" not found in map.

Probably smb access from a Mac causing havoc:

Feb 15 16:37:54 earth automount[22071]: update_negative_cache: key "System" not found in map.
Feb 15 16:37:54 earth automount[22071]: update_negative_cache: key "._.DS_Store" not found in map.
Feb 15 16:37:54 earth automount[22071]: update_negative_cache: key ".Spotlight-V100" not found in map.
Feb 15 16:37:54 earth automount[22071]: update_negative_cache: key "._Backups.backupdb" not found in map.
Feb 15 16:37:54 earth automount[22071]: update_negative_cache: key "Backups.backupdb" not found in map.
Feb 15 16:37:55 earth automount[22071]: update_negative_cache: key "*" not found in map.
Thanks for the bump. A fresh look at this makes me think the patch below might help.
Created attachment 394438 [details]
Patch - mapent becomes negative during lookup
A test package with the above patch is available at:

http://people.redhat.com/~ikent/autofs-5.0.1-0.rc2.139.bz551599.1.el5

Could you test this out please.

Ian
So far so good. I don't have a 100% reproducer, but it survived some initial Mac browsing. Will keep an eye on it.
I think I am also seeing this problem, as the backtrace looks rather similar on the core files I have:

#0 0x00002aaaaaab7281 in lookup_mount () from /usr/lib64/autofs/lookup_ldap.so
#1 0x00002ba166749511 in lookup_nss_mount () from /usr/sbin/automount
#2 0x00002ba166742254 in ?? () from /usr/sbin/automount
#3 0x00002ba166b9e617 in start_thread () from /lib64/libpthread.so.0
#4 0x00002ba167a6ec2d in clone () from /lib64/libc.so.6

The error line I'm seeing in syslog is:

Feb 18 10:00:28 sunfire42 kernel: automount[18420]: segfault at 00002ba16a65e3c0 rip 00002aaaaaab7281 rsp 000000004af46c20 error 4

However, I do not have a corresponding line with "*" as a map name. It's happened twice so far, once on 8 Feb and once today (18 Feb). I still have both core files available, and can send them in if they'll be of any value. Meanwhile, I will try the patch.
I installed the release candidate package on my mail server, where I was seeing automount crash. It crashed again yesterday evening. The backtrace in the core file is:

#0 0x00002aaaaaab704e in lookup_mount () from /usr/lib64/autofs/lookup_ldap.so
#1 0x00002af8d9a19621 in lookup_nss_mount () from /usr/sbin/automount
#2 0x00002af8d9a12294 in ?? () from /usr/sbin/automount
#3 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#4 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

The error line in syslog, with the ones leading up to it (the prior automount log entries date to 18:29:00):

Mar 3 18:56:30 sunfire42 automount[25326]: update_negative_cache: key "lib" not found in map.
Mar 3 18:56:30 sunfire42 automount[25326]: update_negative_cache: key "lib" not found in map.
Mar 3 18:56:30 sunfire42 automount[25326]: update_negative_cache: key "man" not found in map.
Mar 3 18:56:30 sunfire42 kernel: automount[15394]: segfault at 00002af8eca007e0 rip 00002aaaaaab704e rsp 000000004c199c20 error 4

The three lines preceding the segfault do not follow the pattern of prior log entries. In all the prior log entries, the pattern is:

Mar 3 18:29:00 sunfire42 automount[25326]: update_negative_cache: key "lib" not found in map.
Mar 3 18:29:00 sunfire42 automount[25326]: update_negative_cache: key "lib" not found in map.
Mar 3 18:29:00 sunfire42 automount[25326]: update_negative_cache: key "man" not found in map.
Mar 3 18:29:00 sunfire42 automount[25326]: update_negative_cache: key "man" not found in map.

In the segfault, the second "man" did not appear. I have both prior core files available, as well as the one from this crash, if they will be of any value.
(In reply to comment #10)
> I installed the release candidate package on my mail server, where I was seeing
> automount crash. It crashed again yesterday evening. The backtrace in the
> core file is:
>
> #0 0x00002aaaaaab704e in lookup_mount () from /usr/lib64/autofs/lookup_ldap.so
> #1 0x00002af8d9a19621 in lookup_nss_mount () from /usr/sbin/automount
> #2 0x00002af8d9a12294 in ?? () from /usr/sbin/automount
> #3 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
> #4 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Useless, no line numbers.

Please post the backtrace output from all threads using "thr a a bt"
(gdb) thr a a bt

Thread 19 (process 25326):
#0 0x00002af8d9e76658 in do_sigwait () from /lib64/libpthread.so.0
#1 0x00002af8d9e766fd in sigwait () from /lib64/libpthread.so.0
#2 0x00002af8d9a0d59d in _start () from /usr/sbin/automount

Thread 18 (process 25327):
#0 0x00002af8d9e72f70 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00002af8d9a2261c in ?? () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 17 (process 25328):
#0 0x00002af8d9e72f70 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00002af8d9a1bc48 in ?? () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 16 (process 25331):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 15 (process 25334):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 14 (process 25335):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 13 (process 25336):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 12 (process 25337):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 11 (process 25338):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 10 (process 25339):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 9 (process 25340):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 8 (process 25343):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 7 (process 25344):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 6 (process 25345):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 5 (process 25346):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 4 (process 25347):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 3 (process 25348):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 2 (process 25349):
#0 0x00002af8dad35e46 in poll () from /lib64/libc.so.6
#1 0x00002af8d9a10244 in handle_mounts () from /usr/sbin/automount
#2 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#3 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6

Thread 1 (process 15394):
#0 0x00002aaaaaab704e in lookup_mount () from /usr/lib64/autofs/lookup_ldap.so
#1 0x00002af8d9a19621 in lookup_nss_mount () from /usr/sbin/automount
#2 0x00002af8d9a12294 in ?? () from /usr/sbin/automount
#3 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
#4 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6
(gdb)
(In reply to comment #12)
> (gdb) thr a a bt

snip ...

> Thread 1 (process 15394):
> #0 0x00002aaaaaab704e in lookup_mount () from /usr/lib64/autofs/lookup_ldap.so
> #1 0x00002af8d9a19621 in lookup_nss_mount () from /usr/sbin/automount
> #2 0x00002af8d9a12294 in ?? () from /usr/sbin/automount
> #3 0x00002af8d9e6e617 in start_thread () from /lib64/libpthread.so.0
> #4 0x00002af8dad3ec2d in clone () from /lib64/libc.so.6
> (gdb)

Mmmm ... no line numbers. Guessing where this might be happening is almost
always wrong, so I really need the line numbers.

Trying to set up an identical system gives, at best, inaccurate results and
usually doesn't work at all with gdb, especially with 64 bit systems.

Do you have the autofs debuginfo package installed?
Do you have a wildcard entry in your map?
> Guessing where this might be happening is almost always wrong, so I really
> need the line numbers.
>
> Trying to set up an identical system gives, at best, inaccurate results and
> usually doesn't work at all with gdb, especially with 64 bit systems.
>
> Do you have the autofs debuginfo package installed?

No, I do not. It also doesn't appear in the list of packages available to install on this system. A search on RHN for packages containing autofs as part of their name turns up only autofs and autofs5 -- no debug package.

Is the debug package something I can download separately and install?
(In reply to comment #14)
> Do you have a wildcard entry in your map?

No. Our automount map is handled through LDAP, and has upward of 3,000 entries in it (the vast majority of which are home directories), but there are none containing *. All our map entries use only the following characters (in case any others are special):

+-./_
0-9
A-Z
a-z
Since that could be confusing: the first line there is raw characters; the "-" does not represent a range. The other three lines do represent ranges.
(In reply to comment #15)
> > Guessing where this might be happening is almost always wrong, so I really
> > need the line numbers.
> >
> > Trying to set up an identical system gives, at best, inaccurate results and
> > usually doesn't work at all with gdb, especially with 64 bit systems.
> >
> > Do you have the autofs debuginfo package installed?
>
> No, I do not. It also doesn't appear in the list of packages available to
> install on this system. A search on RHN for packages containing autofs as part
> of their name turns up only autofs and autofs5 -- no debug package.
>
> Is the debug package something I can download separately and install?

I don't use RHN so I can't help you with that, but I always provide debuginfo packages when I provide test packages. Also, building the srpm locally will produce the debuginfo package along with the base package.

The test package specified in comment #7 needs to be updated to the latest revision. I'll get to that as soon as I can.
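For the record, once a matching autofs-debuginfo package is installed (from a debuginfo repo, or from the locally rebuilt srpm), a backtrace with line numbers can be taken straight from the core along these lines (the core file path is just a placeholder):

        # debuginfo-install comes from yum-utils; installing the autofs-debuginfo
        # rpm by hand works just as well
        debuginfo-install autofs
        gdb /usr/sbin/automount /path/to/core
        (gdb) thread apply all bt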
Just updated to 5.5 and autofs-5.0.1-0.rc2.143.el5 and promptly got a crash again. Could we get an updated package? Thanks!
This request was evaluated by Red Hat Product Management for inclusion in the current release of Red Hat Enterprise Linux. Because the affected component is not scheduled to be updated in the current release, Red Hat is unfortunately unable to address this request at this time. Red Hat invites you to ask your support representative to propose this request, if appropriate and relevant, in the next release of Red Hat Enterprise Linux.
Looks like another autofs update (autofs-5.0.1-0.rc2.143.el5_5.4) came through without this fix. Can this please get into the update stream? Thanks!
(In reply to comment #22)
> Looks like another autofs update (autofs-5.0.1-0.rc2.143.el5_5.4) came through
> without this fix. Can this please get into the update stream? Thanks!

I thought I had produced an updated test package after your previous alert. Sorry, I'll do that now.
(In reply to comment #23)
> (In reply to comment #22)
> > Looks like another autofs update (autofs-5.0.1-0.rc2.143.el5_5.4) came through
> > without this fix. Can this please get into the update stream? Thanks!
>
> I thought I had produced an updated test package after your
> previous alert. Sorry, I'll do that now.

Please try this updated package:

http://people.redhat.com/~ikent/autofs-5.0.1-0.rc2.147.bz551599.1.el5
Installed. Seems fine.
Hi Ian,

Do you have any reproducer for the bug?
(In reply to comment #27)
> Hi Ian,
> Do you have any reproducer for the bug?

No, I've never been able to reproduce it. The customer backtraces clearly pointed to the problem though.

And, although there are some additional changes to deal with side effects, the changes have been in use for some time, upstream, in various test packages, and in some z-stream updates. So I consider the changes to be stable and to resolve the problem.
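For anyone wanting to poke at it: since the crashes always follow bursts of update_negative_cache messages, one rough way to stress the suspected window is to hammer an indirect mount point with lookups of keys that aren't in the map, much like the Mac SMB clients were effectively doing. Something along these lines (the /nfs mount point is only an example, and this is not a known reproducer):

        # stress failed key lookups on an autofs indirect mount;
        # adjust the path to a real automounted directory
        for i in $(seq 1 5000); do
                ls /nfs/no-such-key-$i > /dev/null 2>&1 &
        done
        wait

No guarantee it triggers the race, but it exercises the same failed-lookup path seen in the backtraces.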
This request was erroneously denied for the current release of Red Hat Enterprise Linux. The error has been fixed and this request has been re-proposed for the current release.
Did a code review per comment #28 and verified that the patch autofs-5.0.1-mapent-becomes-negative-during-lookup.patch is applied in autofs-5.0.1-0.rc2.156.el5.
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-1079.html