Red Hat Bugzilla – Bug 230339
A fatal "Segmentation fault" occurs when many consecutive mount.nfs4 processes are executed.
Last modified: 2010-10-22 09:24:08 EDT
Description of problem:
When many consecutive mount.nfs4 processes are executed, a failure usually occurs: the
process exits with the retval "EINVAL" and the error message "Segmentation fault".
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Execute an ELF-format executable file (for example, mount.nfs4).
2. Repeat step 1 about 2000~8192 times.
I have investigated the problem and found that the cause of the "Segmentation fault" lies
in the kernel's ELF binary loader. I believe the fault comes from a design limitation in
load_elf_binary() and load_elf_interp().
The operation flow for obtaining the interpreter's map address, and the checks on it, is as follows:

| do_execve()
|- search_binary_handler()
|  |- linux_binfmt = elf_format
|  |- load_elf_binary()
|     |- elf_entry = load_elf_interp()
|     |    vaddr = eppnt->p_vaddr;
|     |    kernel_read(..., &eppnt, ...)
|     |    map_addr = elf_map(...)
|     |    if (BAD_ADDR(map_addr)) ...
|     |    load_addr = map_addr - ELF_PAGESTART(vaddr)
|     |    return load_addr
|     |- if (BAD_ADDR(elf_entry))
|     |- elf_entry = elf_entry + loc->interp_elf_ex.e_entry
|     |- if (BAD_ADDR(elf_entry))
|          force_sig(SIGSEGV, current);
|          retval = -EINVAL;
In do_execve(), after setting up some data structures, the kernel invokes
search_binary_handler() to find the ELF binary loader for mount.nfs4, which then reads
the ELF executable image into memory segment by segment, including the interpreter
segment. In our test, when the "Segmentation fault" of mount.nfs4 happened, the
elf_entry address of the interpreter segment returned by load_elf_interp() inside
load_elf_binary() was judged a BAD_ADDR; the kernel then sent a forced SIGSEGV to the
mount.nfs4 process, which exited with the retval "EINVAL". That is how the error occurred.
In load_elf_interp(), eppnt->p_vaddr is the virtual address of the mapped segment; it is
a fixed address (7503872 in the failing mount.nfs4 run). The map_addr is returned by
elf_map(). Because mount.nfs4 is an ET_DYN (shared object) executable, map_addr is a
randomized mapping address chosen by elf_map().
In the normal case, map_addr is greater than vaddr, and the relocation adjustment (load)
address is given by "map_addr - ELF_PAGESTART(vaddr)".
After the BAD_ADDR check on elf_entry, elf_entry is adjusted to a user virtual address of
the current process by "elf_entry + loc->interp_elf_ex.e_entry". The adjusted elf_entry
is then used as the entry point (startup routine) of the process.
Unfortunately, because the map_addr returned by elf_map() is randomized, it is possible
for map_addr to be less than vaddr, and then the problem occurs.
Across many consecutive mount operations, the failure usually occurs. In the failure
case, map_addr was 7499776, which is less than the vaddr 7503872. When
"map_addr - ELF_PAGESTART(vaddr)" is computed, both operands being unsigned long, the
subtraction wraps around and load_addr became 4294963200 (0xFFFFF000). The BAD_ADDR
check on elf_entry then misjudged that load_addr as a bad address, because it exceeded
TASK_SIZE. So before the adjustment to the user virtual address could be applied, the
SIGSEGV was sent and the process exited.
The bug is that the BAD_ADDR check is made before the user-virtual-address adjustment
"elf_entry + loc->interp_elf_ex.e_entry". The address actually used as the process entry
point is the adjusted virtual address; the elf_entry returned from load_elf_interp() is
only a relative address based on the load address. The BAD_ADDR check on elf_entry
should therefore be made only after the adjustment
"elf_entry + loc->interp_elf_ex.e_entry".
For this problem in binfmt_elf, I have made a patch addressing the limitation described
above. I have tested it: with the patch applied, the "Segmentation fault" problem is
resolved.
The attachment is the patch for the kernel of RHEL5Beta2.
Created attachment 148915 [details]
The patch for the binfmt_elf of the kernel
Something akin to the attached patch exists upstream, but the
linux-2.6-execshield.patch patch does this:
@@ -443,8 +491,7 @@ static unsigned long load_elf_interp(str
- *interp_load_addr = load_addr;
- error = ((unsigned long)interp_elf_ex->e_entry) + load_addr;
+ error = load_addr;
... will have to get w/ the author of that patch to see what's going on.
Ingo, the suggested patch here actually reverts part of the exec-shield patch.
I'm out of my area of expertise here, any comments?
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release. Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update release.
It's unclear how this scenario can be reproduced and tested,
so I have placed the kernel src.rpm and two i686 binary rpms
(PAE and non-PAE) containing your unmodified patch here:
Can you please test and verify them?
I have tested your kernel*-bz230339.
It is confirmed that the patch really solves the problem.
So, can my patch be applied in the latest RHEL5 product?
You can download this test kernel from http://people.redhat.com/dzickus/el5
*** Bug 247169 has been marked as a duplicate of this bug. ***
*** Bug 294141 has been marked as a duplicate of this bug. ***
My bug 294141 was made a duplicate of this one, but there seem to be two different
changes to the exec-shield patch: the change in comment 3 of this bug, and the change in
comment 7 of bug 246623. I have tried the change in comment 7 of 246623, and it works
for me.
Are these just two ways to fix the same bug, or do they actually fix different bugs?
There are two ways to fix the same bug. The fix that went into
the upcoming RHEL5.1 release was put into place in June, before
it was addressed in a different manner upstream and in Fedora.
Confirmed the fix is in the -47.el5 kernel; it looks like the patch was already tested
by the reporter some time ago.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.