Description of problem:
I have set up a kickstart network; the ks.cfg files live on the same NFS server as the RHL9 install media. I've used both the first RHL9 install CD and PXE boot (kernel images provided on the first CD), with the following boot command at the syslinux prompt:

linux ks=nfs:192.168.2.3:/ks/ks9.cfg.rm

The problem: after the kickstart file is read and anaconda tries to mount the NFS directory containing the install media, anaconda quits with a signal 11. Attached you will find the ks9.cfg.rm file.

If I use the first CD together with a floppy disk and place the exact same kickstart file on the floppy, the install proceeds as expected with no problems. We've duplicated this issue across 2 identical systems, but can't duplicate it on any other system. Specs of the system in question are:

1x P4 2.4 GHz CPU w/ 533 FSB
Intel 845 chipset motherboard
2x PC2100 DDR memory
1x Seagate 120 GB HDD
onboard Intel e100 network card (used for kickstart)

See URL for further details.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Boot the system from CD or PXE
2. Attempt a kickstart install with the ks file on an NFS server
3. Watch the signal 11 happen

Actual Results: Anaconda quits with a signal 11.

Expected Results: Anaconda continues to kickstart the system.

Additional info: If necessary, we (Pogo Linux) may be able to send one of these systems to Red Hat for additional testing. This issue puts a major crimp in our production practices.
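For context, the install source inside the kickstart file itself is given with an nfs directive. A minimal sketch (the server IP comes from the boot command above; the directory path is an assumption based on the layout described in the next comment, and all other kickstart lines are omitted):

```
# Minimal kickstart fragment (illustrative only, not the attached file)
install
nfs --server 192.168.2.3 --dir /ks9
```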
Created attachment 91047 [details]
RHL9 kickstart config file for rackmount servers

Of note: on the kickstart NFS server, the full path to the RedHat/ tree is /var/kickstart/ks9/, with /ks9 being a symlink to that directory. Kickstart config files live in /var/kickstart/, and /ks is a symlink to that directory as well. /etc/exports shares /var/kickstart/.
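The server layout described above can be sketched as follows (paths are from this comment; the export options after the path are assumptions, since the comment only says /var/kickstart/ is shared):

```
# Symlinks on the NFS server
/ks9 -> /var/kickstart/ks9    # RHL9 install tree (contains RedHat/)
/ks  -> /var/kickstart        # kickstart config files (ks9.cfg.rm etc.)

# /etc/exports (options are illustrative)
/var/kickstart  *(ro,sync)
```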
We have seen the same problem: twin Xeon processors, twin e1000 Ethernet. If you want to avoid the problem while it is being fixed, you can use HTTP instead:

linux ks=http://website/kickstartfile

works for us with an appropriate web server, together with

url --url http://website/pathtorh9

in the kickstart file.
I cannot reproduce this issue on a variety of systems here. I would recommend using http to pull your ks configs.
In our case the NFS server was a NetApp box (the HTTP server was on a Sun). There was also probably a duplex mismatch between the router (half duplex) and the system (full duplex). We may well try to reproduce the problem without the network weirdness.
Just as a point of note, this problem is reproducible here using RHEL3, a single-Xeon (though hyperthreaded, so SMP) server with an e1000 NIC (IBM xSeries 335). Same bug: if we do an NFS/PXE install, we get a signal 11 right after the drivers are loaded and it attempts to pull down the kickstart file.
I'm encountering the same problem in RHEL WS3 u2 on two different systems. One is a Dell PowerEdge 1750 using a BCM5704 ethernet controller (bcm5700 driver) which connects to a NetApp box. The other is VMware 4.5 with a virtual AMD 79c970 ethernet controller (pcnet32 driver) which connects directly to the host system. Both try to load the kickstart file over NFS.

Running ethereal on both of the networks shows that the SEGV occurs before the kickstart file is read -- even before the NFS mount is complete. There is an ARP request/reply, then a series of sunrpc packets followed by a Portmap DUMP call/reply, another pair of sunrpc packets, a Portmap GETPORT call, another pair of sunrpc packets, the GETPORT reply, and finally one more Portmap GETPORT call/reply. At that point it crashes.

I suspect the problem has something to do with the PXELinux or DHCP configuration, because I seem to recall having the thing working at one time...
Ha! I found the silly cause of the problem. The NFS service wasn't running on the server holding the kickstart file on either of the networks. After fixing that, the MOUNT call appears between the Portmap DUMP and the first Portmap GETPORT (on both the real system and VMware). Perhaps the other posters on this bug could check their setups and confirm whether NFS was running ("service nfs status")?

If that's the case, it looks like this is simply a problem of robustness. Anaconda should check whether NFS services are available on the remote server, and if not, show an appropriate error message and either exit gracefully or enter interactive mode.
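The check suggested above can be sketched from the client side with rpcinfo, which dumps the remote portmap table; NFS is RPC program 100003. This is a minimal illustration of the idea, not anaconda's actual code, and the helper name is made up:

```shell
# Return success if a portmap dump (output of `rpcinfo -p <server>`)
# lists the NFS program (100003).
nfs_registered() {
    printf '%s\n' "$1" | awk '$1 == 100003 { found = 1 } END { exit !found }'
}

# Usage against a live server (192.168.2.3 is the server from the
# original report):
#   if nfs_registered "$(rpcinfo -p 192.168.2.3)"; then
#       echo "NFS is available, safe to attempt the mount"
#   else
#       echo "NFS not registered on server -- is 'service nfs' running?" >&2
#   fi
```

If the program is missing from the dump, the mount attempt is doomed, so an installer could bail out with a clear message at this point instead of crashing.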