Bug 240121 - nash receives a SIGSEGV while processing a DHCPACK
Status: CLOSED WORKSFORME
Product: Fedora
Classification: Fedora
Component: mkinitrd
Version: rawhide
Hardware: ppc64 Linux
Priority: high    Severity: high
Assigned To: Peter Jones
QA Contact: David Lawrence
Depends On:
Blocks: 240434
Reported: 2007-05-15 07:58 EDT by Jochen Roth
Modified: 2007-11-30 17:12 EST (History)
CC List: 0 users

Doc Type: Bug Fix
Last Closed: 2007-09-11 17:31:36 EDT

Attachments: None
Description Jochen Roth 2007-05-15 07:58:46 EDT
Description of problem:
After creating an initrd suitable for booting an NFS-root environment, nash
receives a SIGSEGV while processing a DHCPACK.

Version-Release number of selected component (if applicable):
nash-6.0.9-1
mkinitrd-6.0.9-1
libdhcp4client-3.0.5-34.fc7
libdhcp-1.24-4.fc7

How reproducible:
always

Steps to Reproduce:
1. Install kernel-2.6.21-1.3125 rpm 
2. mkinitrd --with=tg3 --rootopts=ro,nolock --net-dev=eth0 \
   --rootdev=10.64.0.1:/nfsroot/10.64.4.33 --rootfs=nfs \
   /boot/initrd-2.6.21-1.3125.img 2.6.21-1.3125
3. mkzimage /boot/vmlinuz-2.6.21-1.3125 no no /boot/initrd-2.6.21-1.3125.img \
   /usr/share/ppc64-utils/zImage.stub /boot/zImage.initrd-2.6.21-1.3125.fc7
4. Boot the created zImage
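
To sanity-check the image from step 2 before booting it, the initrd can be listed on the build host. A minimal sketch, assuming the image is the usual gzip-compressed cpio archive produced by mkinitrd 6.0.x:

# list the initrd contents and confirm that nash and the tg3 module were included
zcat /boot/initrd-2.6.21-1.3125.img | cpio -t | grep -E 'nash|tg3'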

Additional info:
tg3: eth0: Link is up at 1000 Mbps, full duplex.
tg3: eth0: Link is up at 1000 Mbps, full duplex.
tg3: eth0: Flow control is off for TX and off for RX.
tg3: eth0: Flow control is off for TX and off for RX.
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 9
DHCPOFFER from 10.64.0.1
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 10.64.0.1
nash received SIGSEGV!  Backtrace:
/bin/nash [0x1000f7c0] (0x1000f7c0)
[0x100448] (0x100448)
/usr/lib64/libdhcp.so.1(dhcpv4_lease-0x28cf0) [0x400002f05b8] (0x400002f05b8)
/usr/lib64/libdhcp.so.1(dhcp4_nic_callback-0x25d10) [0x400002f3698] (0x400002f3)
/usr/lib64/libdhcp4client-3.0.5.so.0 [0x400003abcbc] (0x400003abcbc)
/usr/lib64/libdhcp4client-3.0.5.so.0 [0x400003b0250] (0x400003b0250)
/usr/lib64/libdhcp4client-3.0.5.so.0 [0x400003b0864] (0x400003b0864)
/usr/lib64/libdhcp4client-3.0.5.so.0 [0x400003add2c] (0x400003add2c)
/usr/lib64/libdhcp4client-3.0.5.so.0 [0x400003c5eb4] (0x400003c5eb4)
/usr/lib64/libdhcp4client-3.0.5.so.0 [0x400003bb4f8] (0x400003bb4f8)
/usr/lib64/libdhcp4client-3.0.5.so.0 [0x400003e935c] (0x400003e935c)
/usr/lib64/libdhcp4client-3.0.5.so.0(dhcpv4_client-0x7cd6c) [0x400003b25b4] (0x)
/usr/lib64/libdhcp.so.1(do_dhcpv4-0x25f50) [0x400002f3428] (0x400002f3428)
/usr/lib64/libdhcp.so.1 [0x40000302088] (0x40000302088)
/usr/lib64/libdhcp.so.1(dhcp_nic-0x173f4) [0x40000302954] (0x40000302954)
/usr/lib64/libdhcp.so.1(pumpDhcpClassRun-0x15410) [0x400003049a8] (0x400003049a)
/bin/nash [0x10010fac] (0x10010fac)
/bin/nash [0x1000ec60] (0x1000ec60)
/bin/nash [0x1000f42c] (0x1000f42c)
/bin/nash [0x1000fcb4] (0x1000fcb4)
/lib64/libc.so.6 [0x4000069bdcc] (0x4000069bdcc)
/lib64/libc.so.6(__libc_start_main-0x166c60) [0x4000069c060] (0x4000069c060)
Kernel panic - not syncing: Attempted to kill init!



This problem also occurs when the following nash script is called. 

#!/bin/nash 
network --device eth0 --bootproto dhcp

However, the same DHCP request works fine if we just run dhclient.
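
For reference, the same comparison can be run from an installed system; a minimal sketch (the script path is an illustrative choice, and dhclient -d simply keeps the ISC client in the foreground):

# nash's built-in DHCP client (crashes as described above)
cat > /tmp/dhcp-test.nash <<'EOF'
#!/bin/nash
network --device eth0 --bootproto dhcp
EOF
/bin/nash /tmp/dhcp-test.nash

# the same lease request through the ISC client (succeeds)
dhclient -d eth0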
Comment 1 Jochen Roth 2007-05-15 08:01:34 EDT
I forgot to mention that this problem occurs on a Cell/B.E. system. 
Comment 2 Jochen Roth 2007-05-21 02:23:59 EDT
Changing severity to "high", as the only available workaround for getting the
systems booted with Fedora 7 is to apply static IP addresses.
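
For the record, one way a static-address workaround can look for an NFS-root boot is to bypass DHCP entirely and pass the addressing on the kernel command line. This is only a sketch of the idea, not necessarily the workaround used here: it assumes kernel-level IP autoconfiguration and NFS-root support (ip=/nfsroot=) are built into this kernel, and the 255.255.0.0 netmask is a guess based on the addresses in the reproduction steps.

root=/dev/nfs nfsroot=10.64.0.1:/nfsroot/10.64.4.33 ip=10.64.4.33:10.64.0.1::255.255.0.0::eth0:none ro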
Comment 3 Jochen Roth 2007-05-30 08:12:19 EDT
It looks like this problem is fixed, at least with the latest development
versions of mkinitrd, nash, libdhcp4client, and libdhcp.
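
For reference when re-testing, the installed builds can be captured in a single query (the packages are the same ones listed at the top of this report):

rpm -q mkinitrd nash libdhcp libdhcp4client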


Thanks!
Comment 4 Jochen Roth 2007-05-30 12:32:51 EDT
OK, it works on our systems now. You can close the bug. Thanks for the fix!
