Bug 51964 - nfs-utils-0.3.1-0.6.x.1: mount failed in kickstart; nfs version 3 problem
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: nfs-utils
Version: 6.2
Hardware: i386
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Assignee: Pete Zaitcev
QA Contact: Brian Brock
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2001-08-17 15:35 UTC by Arnoud Witt
Modified: 2007-04-18 16:36 UTC (History)
0 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2004-04-19 15:54:58 UTC
Embargoed:



Description Arnoud Witt 2001-08-17 15:35:48 UTC
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)

Description of problem:
When I try a kickstart install from a RH 6.2 install server with 
kernel-2.2.19-6.2.7 and nfs-utils-0.3.1-0.6.x.1 installed, using 
kickstart floppies of different versions (6.2, 7.0, and 7.1), the mount
fails with the 7.0 version. 
During the post-install part of the 7.1 install, it takes exactly 5 
minutes for the mount to succeed. When I use the nolock mount option 
in the post-install part, there is no delay.
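For reference, the nolock workaround in the post-install section would look roughly like the following (the server name and paths are hypothetical, not taken from the report):

```shell
# Hypothetical %post mount line. The nolock option skips the NLM
# lock protocol, so the client does not wait for the lock daemon
# to answer -- which avoids the 5-minute delay described above.
mount -o nolock install-server:/kickstart /mnt/ks
```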

The solution to the above problems seems to be the following:
On the install server (RH 6.2, kernel-2.2.19-6.2.7, 
nfs-utils-0.3.1-0.6.x.1) I changed the script /etc/rc.d/init.d/nfs by 
adding the line

RPCMOUNTOPTS="--no-nfs-version 3"

after the checks for kernel version and release. With this I forced 
the use of an NFS version lower than 3 in all cases.
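A minimal sketch of where that line sits in the init script, assuming the stock Red Hat 6.2 layout of /etc/rc.d/init.d/nfs (the surrounding lines here are paraphrased, not quoted from the actual script):

```shell
# Excerpt (sketch) from /etc/rc.d/init.d/nfs:
# ... kernel version and release checks run above this point ...

# Tell rpc.mountd to refuse NFS version 3, so clients
# negotiate down to v2:
RPCMOUNTOPTS="--no-nfs-version 3"

# Later in the script, mountd is started with these options:
daemon rpc.mountd $RPCMOUNTOPTS
```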

So in my opinion NFSv3 is not handled properly by rpc.mountd from 
nfs-utils-0.3.1-0.6.x.1.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Boot from a bootnet.img floppy
2. At the boot: prompt, type: linux ks=nfs:install-server:/kickstart/<version>/ks.cfg
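For reference, the boot line above makes the installer fetch ks.cfg over NFS; a minimal NFS install-source stanza inside that ks.cfg might look like the following (server name and export path are hypothetical):

```shell
# Hypothetical fragment of /kickstart/<version>/ks.cfg:
# install from an NFS export on the same install server.
nfs --server install-server --dir /export/redhat/<version>
```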

Actual Results:  Mount failed

Expected Results:  Mount succeeded

Additional info:

Comment 1 Pete Zaitcev 2004-04-19 15:54:58 UTC
I'm sorry to report, this fell over the horizon due to
manpower constraints, closing.


