Description of problem:
I have an NFS4 volume being served by EMC Isilon. I am able to mount this volume manually using the "mount" command, but when I try to automount it using the Isilon SmartConnect hostname, I get a "mount.nfs4: Failed to resolve server x.x.x.x: Name or service not known" error. I am able to mount this same NFS4 volume using a static IP address, though. I have other NFS4 volumes served from NetApp via the automounter which work fine. This issue seems to be very specific to the Isilon SmartConnect zone.

Version-Release number of selected component (if applicable):
autofs-5.0.5-74.el6_4.x86_64
nfs-utils-1.2.3-36.el6.x86_64

How reproducible:
Very reproducible.

Steps to Reproduce:
1. Create an NFS volume in Isilon and assign a SmartConnect IP pool.
2. Create an automount entry to mount this volume using the SmartConnect zone name.
3. Try to access the automount location.

Actual results:
-bash-4.1$ ls -al /data/nfs4_test/
ls: cannot access /data/nfs4_test/: No such file or directory

Expected results:
-bash-4.1$ ls -al /data/nfs4_test/
total 10
d---rwx--- 4 root wheel 81 Aug 22 14:47 ./
drwxr-xr-x 3 root root 0 Aug 26 21:36 ../

Additional info:
Please send a full debug log by setting LOGGING="debug" in the autofs configuration. Make sure that syslog is recording facility daemon level debug and greater to ensure we capture the log information.
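For anyone reproducing this, on RHEL 6 the logging Ian asks for can be enabled roughly as follows (a sketch; the rsyslog destination file is an assumption about the local setup, and the sed pattern assumes an uncommented LOGGING line in /etc/sysconfig/autofs):

```shell
# Turn on automount debug logging; autofs reads this file at daemon start.
# Adjust the pattern if the LOGGING line is commented out in your file.
sed -i 's/^LOGGING=.*/LOGGING="debug"/' /etc/sysconfig/autofs

# Ensure syslog records facility daemon at level debug and above,
# e.g. by adding a line like this to /etc/rsyslog.conf:
#
#   daemon.debug    /var/log/autofs-debug.log

# Restart both daemons so the changes take effect:
service rsyslog restart
service autofs restart
```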
(In reply to Ian Kent from comment #2)
> Please send a full debug log by setting LOGGING="debug" in the
> autofs configuration.
>
> Make sure that syslog is recording facility daemon level debug
> and greater to ensure we capture the log information.

Not sure if this is what you are looking for. I have sanitized. xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the pool from the SmartConnect zone.

Aug 26 22:05:04 node1 automount[42397]: handle_packet: type = 3
Aug 26 22:05:04 node1 automount[42397]: handle_packet_missing_indirect: token 1193, name nfs4_test, request pid 50212
Aug 26 22:05:04 node1 automount[42397]: attempting to mount entry /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: lookup_mount: lookup(ldap): looking up nfs4_test
Aug 26 22:05:04 node1 automount[42397]: do_bind: lookup(ldap): auth_required: 8, sasl_mech PLAIN
Aug 26 22:05:04 node1 automount[42397]: do_bind: lookup(ldap): ldap simple bind returned 0
Aug 26 22:05:04 node1 automount[42397]: lookup_one: lookup(ldap): searching for "(&(objectclass=nisObject)(|(cn=nfs4_test)(cn=/)(cn=\2A)))" under "CN=auto.data,CN=xxx,CN=automount,DC=dom2,DC=xxx,DC=xxx"
Aug 26 22:05:04 node1 automount[42397]: lookup_one: lookup(ldap): getting first entry for cn="nfs4_test"
Aug 26 22:05:04 node1 automount[42397]: lookup_one: lookup(ldap): examining first entry
Aug 26 22:05:04 node1 automount[42397]: validate_string_len: lookup(ldap): string nfs4_test encoded as nfs4_test
Aug 26 22:05:04 node1 automount[42397]: lookup_mount: lookup(ldap): nfs4_test -> -fstype=nfs4,quota,hard,nobrowse hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): expanded entry: -fstype=nfs4,quota,hard,nobrowse hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): gathered options: fstype=nfs4,quota,hard,nobrowse
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): dequote("hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test") -> hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,quota,hard,nobrowse, loc=hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: sun_mount: parse(sun): mounting root /data, mountpoint nfs4_test, what hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test, fstype nfs4, options quota,hard
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): root=/data name=nfs4_test what=hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test, fstype=nfs4, options=quota,hard
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): nfs options="quota,hard", nobind=0, nosymlink=0, ro=0
Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time: 0.000186
Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host hpc.xxx.xxx.xxx.xxx cost 185 weight 0
Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of hosts that support NFS4 over TCP
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling mkdir_path /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve server hpc.xxx.xxx.xxx.xxx: Name or service not known
Aug 26 22:05:04 node1 automount[42397]: mount(nfs): nfs: mount failure hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test on /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: dev_ioctl_send_fail: token = 1193
Aug 26 22:05:04 node1 automount[42397]: failed to mount /data/nfs4_test

Please let me know if you need anything else.
(In reply to Prakash Velayutham from comment #3)
> (In reply to Ian Kent from comment #2)
> > Please send a full debug log by setting LOGGING="debug" in the
> > autofs configuration.
> >
> > Make sure that syslog is recording facility daemon level debug
> > and greater to ensure we capture the log information.
>
> Not sure if this is what you are looking for. I have sanitized.
> xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the
> pool from the SmartConnect zone.

Yep, that's what I need.

snip ...

> Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40

Assuming the nn.nn.nn.nn is the correct address, it seems strange
that autofs was able to resolve the host name ...

> Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time:
> 0.000186
> Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host
> hpc.xxx.xxx.xxx.xxx cost 185 weight 0
> Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of
> hosts that support NFS4 over TCP
> Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> mkdir_path /data/nfs4_test
> Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
> /data/nfs4_test
> Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve
> server hpc.xxx.xxx.xxx.xxx: Name or service not known

but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks
like the mount command is OK.

Steve, heard anything about name resolution problems with mount.nfs4(8)?
Any other thoughts on why this might be happening?

Ian
(In reply to Ian Kent from comment #4)
> (In reply to Prakash Velayutham from comment #3)
> > (In reply to Ian Kent from comment #2)
> > > Please send a full debug log by setting LOGGING="debug" in the
> > > autofs configuration.
> > >
> > > Make sure that syslog is recording facility daemon level debug
> > > and greater to ensure we capture the log information.
> >
> > Not sure if this is what you are looking for. I have sanitized.
> > xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the
> > pool from the SmartConnect zone.
>
> Yep, that's what I need.
>
> snip ...
>
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> > hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
>
> Assuming the nn.nn.nn.nn is the correct address, it seems strange
> that autofs was able to resolve the host name ...
>
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time:
> > 0.000186
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host
> > hpc.xxx.xxx.xxx.xxx cost 185 weight 0
> > Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of
> > hosts that support NFS4 over TCP
> > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > mkdir_path /data/nfs4_test
> > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
> > /data/nfs4_test
> > Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve
> > server hpc.xxx.xxx.xxx.xxx: Name or service not known
>
> but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks
> like the mount command is OK.
>
> Steve, heard anything about name resolution problems with mount.nfs4(8)?
> Any other thoughts on why this might be happening?
>
> Ian

Just to give more info.

1. SELinux is disabled.
2. Mount works when I manually enter:
   mount -t nfs4 hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS5_Test /mnt
3. Mount works when I have the following in the automount nisMapEntry:
   -fstype=nfs4,quota,hard,nobrowse mm.mm.mm.mm:/ifs/data/NFS4_Test
4. Mount does not work only when I have the hostname (and only with an Isilon SmartConnect zone, I think, because hostname resolution works when I automount off NetApp) in the nisMapEntry:
   -fstype=nfs4,quota,hard,nobrowse hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test

mm.mm.mm.mm above is one of the IPs in the IP address pool.

Thanks.
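Given data points 2 through 4, the name appears to resolve for some callers but not for mount.nfs4(8). One low-effort check (a sketch; `localhost` stands in for the SmartConnect zone name, which should be substituted) is to compare a direct DNS query against the libc/nsswitch lookup path that mount.nfs4 relies on:

```shell
# Placeholder name; substitute the SmartConnect zone name,
# e.g. hpc.xxx.xxx.xxx.xxx, when running this on an affected client.
name="localhost"

# Direct DNS query, bypassing /etc/nsswitch.conf:
host "$name" || echo "direct DNS lookup failed"

# Lookup via the libc resolver path (nsswitch + getaddrinfo), which is
# roughly what mount.nfs4 does when it resolves the server name:
getent hosts "$name" || echo "libc-path lookup failed"
```

If `host` succeeds but `getent hosts` fails for the SmartConnect name, the problem is likely in the nsswitch/resolver configuration rather than in DNS itself.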
(In reply to Prakash Velayutham from comment #5)
> (In reply to Ian Kent from comment #4)
> > (In reply to Prakash Velayutham from comment #3)
> > > (In reply to Ian Kent from comment #2)
> > > > Please send a full debug log by setting LOGGING="debug" in the
> > > > autofs configuration.
> > > >
> > > > Make sure that syslog is recording facility daemon level debug
> > > > and greater to ensure we capture the log information.
> > >
> > > Not sure if this is what you are looking for. I have sanitized.
> > > xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the
> > > pool from the SmartConnect zone.
> >
> > Yep, that's what I need.
> >
> > snip ...
> >
> > > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> > > hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
> >
> > Assuming the nn.nn.nn.nn is the correct address, it seems strange
> > that autofs was able to resolve the host name ...
> >
> > > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time:
> > > 0.000186
> > > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host
> > > hpc.xxx.xxx.xxx.xxx cost 185 weight 0
> > > Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of
> > > hosts that support NFS4 over TCP
> > > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > > mkdir_path /data/nfs4_test
> > > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > > mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
> > > /data/nfs4_test
> > > Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve
> > > server hpc.xxx.xxx.xxx.xxx: Name or service not known
> >
> > but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks
> > like the mount command is OK.
> >
> > Steve, heard anything about name resolution problems with mount.nfs4(8)?
> > Any other thoughts on why this might be happening?
> >
> > Ian
>
> Just to give more info.
>
> 1. SELinux is disabled
> 2. Mount works when I manually enter
>    mount -t nfs4 hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS5_Test /mnt
> 3. Mount works when I have the following in the automount nisMapEntry
>    -fstype=nfs4,quota,hard,nobrowse mm.mm.mm.mm:/ifs/data/NFS4_Test
> 4. Mount does not work only when I have the hostname (and only with an
>    Isilon SmartConnect zone, I think, because hostname resolution works when I
>    automount off NetApp) in the nisMapEntry
>    -fstype=nfs4,quota,hard,nobrowse hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
>
> mm.mm.mm.mm above is one of the IPs in the IP address pool.
>
> Thanks.

Wondering if there are any updates with this.

Thanks.
(In reply to Ian Kent from comment #4)
> (In reply to Prakash Velayutham from comment #3)
> > (In reply to Ian Kent from comment #2)
> > > Please send a full debug log by setting LOGGING="debug" in the
> > > autofs configuration.
> > >
> > > Make sure that syslog is recording facility daemon level debug
> > > and greater to ensure we capture the log information.
> >
> > Not sure if this is what you are looking for. I have sanitized.
> > xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the
> > pool from the SmartConnect zone.
>
> Yep, that's what I need.
>
> snip ...
>
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> > hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
>
> Assuming the nn.nn.nn.nn is the correct address, it seems strange
> that autofs was able to resolve the host name ...
>
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time:
> > 0.000186
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host
> > hpc.xxx.xxx.xxx.xxx cost 185 weight 0
> > Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of
> > hosts that support NFS4 over TCP
> > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > mkdir_path /data/nfs4_test
> > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
> > /data/nfs4_test
> > Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve
> > server hpc.xxx.xxx.xxx.xxx: Name or service not known
>
> but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks
> like the mount command is OK.
>
> Steve, heard anything about name resolution problems with mount.nfs4(8)?
> Any other thoughts on why this might be happening?
>
> Ian

Hi Ian,

Any updates? Wondering if I am the only one seeing this issue.

Thanks,
Prakash
(In reply to Prakash Velayutham from comment #7)
> (In reply to Ian Kent from comment #4)
> > (In reply to Prakash Velayutham from comment #3)
> > > (In reply to Ian Kent from comment #2)
> > > > Please send a full debug log by setting LOGGING="debug" in the
> > > > autofs configuration.
> > > >
> > > > Make sure that syslog is recording facility daemon level debug
> > > > and greater to ensure we capture the log information.
> > >
> > > Not sure if this is what you are looking for. I have sanitized.
> > > xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the
> > > pool from the SmartConnect zone.
> >
> > Yep, that's what I need.
> >
> > snip ...
> >
> > > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> > > hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
> >
> > Assuming the nn.nn.nn.nn is the correct address, it seems strange
> > that autofs was able to resolve the host name ...
> >
> > > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time:
> > > 0.000186
> > > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host
> > > hpc.xxx.xxx.xxx.xxx cost 185 weight 0
> > > Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of
> > > hosts that support NFS4 over TCP
> > > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > > mkdir_path /data/nfs4_test
> > > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > > mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
> > > /data/nfs4_test
> > > Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve
> > > server hpc.xxx.xxx.xxx.xxx: Name or service not known
> >
> > but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks
> > like the mount command is OK.
> >
> > Steve, heard anything about name resolution problems with mount.nfs4(8)?
> > Any other thoughts on why this might be happening?
> >
> > Ian
>
> Hi Ian,
>
> Any updates? Wondering if I am the only one seeing this issue.

TBH I don't know where to look, since the name resolution appears to
be working in autofs and the mount command generated by autofs also
looks OK.

Ian
(In reply to Ian Kent from comment #8)
> (In reply to Prakash Velayutham from comment #7)
> > (In reply to Ian Kent from comment #4)
> > > (In reply to Prakash Velayutham from comment #3)
> > > > (In reply to Ian Kent from comment #2)
> > > > > Please send a full debug log by setting LOGGING="debug" in the
> > > > > autofs configuration.
> > > > >
> > > > > Make sure that syslog is recording facility daemon level debug
> > > > > and greater to ensure we capture the log information.
> > > >
> > > > Not sure if this is what you are looking for. I have sanitized.
> > > > xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the
> > > > pool from the SmartConnect zone.
> > >
> > > Yep, that's what I need.
> > >
> > > snip ...
> > >
> > > > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> > > > hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
> > >
> > > Assuming the nn.nn.nn.nn is the correct address, it seems strange
> > > that autofs was able to resolve the host name ...
> > >
> > > > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time:
> > > > 0.000186
> > > > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host
> > > > hpc.xxx.xxx.xxx.xxx cost 185 weight 0
> > > > Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of
> > > > hosts that support NFS4 over TCP
> > > > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > > > mkdir_path /data/nfs4_test
> > > > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > > > mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
> > > > /data/nfs4_test
> > > > Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve
> > > > server hpc.xxx.xxx.xxx.xxx: Name or service not known
> > >
> > > but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks
> > > like the mount command is OK.
> > >
> > > Steve, heard anything about name resolution problems with mount.nfs4(8)?
> > > Any other thoughts on why this might be happening?
> > >
> > > Ian
> >
> > Hi Ian,
> >
> > Any updates? Wondering if I am the only one seeing this issue.
>
> TBH I don't know where to look, since the name resolution appears to
> be working in autofs and the mount command generated by autofs also
> looks OK.
>
> Ian

Is there any other debugging method/tool I can use to get more data that would help you shed some light on this? Currently, I am down to using one node in the Isilon cluster, and that is not efficient at all.

Thanks,
Prakash
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.
(In reply to Ian Kent from comment #4)
> (In reply to Prakash Velayutham from comment #3)
> > (In reply to Ian Kent from comment #2)
> > > Please send a full debug log by setting LOGGING="debug" in the
> > > autofs configuration.
> > >
> > > Make sure that syslog is recording facility daemon level debug
> > > and greater to ensure we capture the log information.
> >
> > Not sure if this is what you are looking for. I have sanitized.
> > xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the
> > pool from the SmartConnect zone.
>
> Yep, that's what I need.
>
> snip ...
>
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> > hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
>
> Assuming the nn.nn.nn.nn is the correct address, it seems strange
> that autofs was able to resolve the host name ...

Does 'host nn.nn.nn.nn' resolve?

> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time:
> > 0.000186
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host
> > hpc.xxx.xxx.xxx.xxx cost 185 weight 0
> > Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of
> > hosts that support NFS4 over TCP
> > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > mkdir_path /data/nfs4_test
> > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
> > /data/nfs4_test
> > Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve
> > server hpc.xxx.xxx.xxx.xxx: Name or service not known
>
> but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks
> like the mount command is OK.
>
> Steve, heard anything about name resolution problems with mount.nfs4(8)?

No, not that I'm aware of...

> Any other thoughts on why this might be happening?

I think it's definitely a DNS issue, since mount uses the getaddrbyXXX()
routines to resolve the address...
Maybe a network trace of the failure might help?
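A rough recipe for the network trace suggested above (a sketch; the interface name eth0, the capture path, and running as root while triggering the automount are all assumptions about the local setup):

```shell
# Capture DNS traffic in the background while reproducing the failure
# (run as root; adjust -i eth0 to the client's actual interface):
tcpdump -i eth0 -s 0 -w /tmp/dns.pcap port 53 &

# Trigger the automount attempt that fails:
ls /data/nfs4_test

# Stop the capture and read it back:
kill %1
tcpdump -n -r /tmp/dns.pcap

# Things to look for: does mount.nfs4 send a query for the SmartConnect
# name at all, and if so, does the DNS server answer with NXDOMAIN or
# with an address from the SmartConnect pool?
```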
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation.

Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/