Bug 1001403 - autofs gives "Failed to resolve server Name or service not known" when mounting from Isilon SmartConnect zone
Summary: autofs gives "Failed to resolve server Name or service not known" when mounting from Isilon SmartConnect zone
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: autofs
Version: 6.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Ian Kent
QA Contact: Filesystem QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-27 01:37 UTC by Prakash Velayutham
Modified: 2017-12-06 12:49 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-06 12:49:48 UTC
Target Upstream Version:
Embargoed:



Description Prakash Velayutham 2013-08-27 01:37:40 UTC
Description of problem:

I have an NFS4 volume being served by an EMC Isilon. I am able to mount this volume manually using the "mount" command, but when I try to automount it using the Isilon SmartConnect hostname, I get a "mount.nfs4: Failed to resolve server x.x.x.x: Name or service not known" error. I am able to mount the same NFS4 volume using a static IP address, though. I have other NFS4 volumes served from NetApp via the automounter which work fine. This issue seems to be very specific to the Isilon SmartConnect zone.

Version-Release number of selected component (if applicable):

autofs-5.0.5-74.el6_4.x86_64
nfs-utils-1.2.3-36.el6.x86_64

How reproducible:

Very reproducible.

Steps to Reproduce:
1. Create an NFS volume in Isilon and assign a SmartConnect IP pool.
2. Create an automount entry to mount this volume using the SmartConnect zone name (see the example map entry below).
3. Try to access the automount location.
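
For illustration only, a file-based indirect map of the kind step 2 describes might look like the following (the reporter actually uses LDAP-backed maps; the hostname and paths here are placeholders, not taken from this report):

# /etc/auto.master -- hypothetical example
/data    /etc/auto.data

# /etc/auto.data -- mount the export via the SmartConnect zone name
nfs4_test    -fstype=nfs4,quota,hard,nobrowse    hpc.zone.example.com:/ifs/data/NFS4_Test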

Actual results:

-bash-4.1$ ls -al /data/nfs4_test/
ls: cannot access /data/nfs4_test/: No such file or directory

Expected results:

-bash-4.1$ ls -al /data/nfs4_test/
total 10
d---rwx--- 4 root   wheel  81 Aug 22 14:47 ./
drwxr-xr-x 3 root   root    0 Aug 26 21:36 ../

Additional info:

Comment 2 Ian Kent 2013-08-27 01:58:41 UTC
Please send a full debug log by setting LOGGING="debug" in the
autofs configuration.

Make sure that syslog is recording facility daemon level debug
and greater to ensure we capture the log information.
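
On RHEL 6 this typically amounts to something like the following (a sketch assuming the stock file locations; adjust to your syslog setup):

# /etc/sysconfig/autofs
LOGGING="debug"

# /etc/rsyslog.conf -- record facility daemon at level debug and above
daemon.*    /var/log/daemon.log

# restart both services to pick up the changes
service rsyslog restart
service autofs restart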

Comment 3 Prakash Velayutham 2013-08-27 02:10:20 UTC
(In reply to Ian Kent from comment #2)
> Please send a full debug log by setting LOGGING="debug" in the
> autofs configuration.
> 
> Make sure that syslog is recording facility daemon level debug
> and greater to ensure we capture the log information.

Not sure if this is what you are looking for; I have sanitized it. xxx.xxx.xxx.xxx is a delegated zone, and nn.nn.nn.nn is an IP address in the pool from the SmartConnect zone.

Aug 26 22:05:04 node1 automount[42397]: handle_packet: type = 3
Aug 26 22:05:04 node1 automount[42397]: handle_packet_missing_indirect: token 1193, name nfs4_test, request pid 50212
Aug 26 22:05:04 node1 automount[42397]: attempting to mount entry /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: lookup_mount: lookup(ldap): looking up nfs4_test
Aug 26 22:05:04 node1 automount[42397]: do_bind: lookup(ldap): auth_required: 8, sasl_mech PLAIN
Aug 26 22:05:04 node1 automount[42397]: do_bind: lookup(ldap): ldap simple bind returned 0
Aug 26 22:05:04 node1 automount[42397]: lookup_one: lookup(ldap): searching for "(&(objectclass=nisObject)(|(cn=nfs4_test)(cn=/)(cn=\2A)))" under "CN=auto.data,CN=xxx,CN=automount,DC=dom2,DC=xxx,DC=xxx"
Aug 26 22:05:04 node1 automount[42397]: lookup_one: lookup(ldap): getting first entry for cn="nfs4_test"
Aug 26 22:05:04 node1 automount[42397]: lookup_one: lookup(ldap): examining first entry
Aug 26 22:05:04 node1 automount[42397]: validate_string_len: lookup(ldap): string nfs4_test encoded as nfs4_test
Aug 26 22:05:04 node1 automount[42397]: lookup_mount: lookup(ldap): nfs4_test -> -fstype=nfs4,quota,hard,nobrowse hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): expanded entry: -fstype=nfs4,quota,hard,nobrowse hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): gathered options: fstype=nfs4,quota,hard,nobrowse
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): dequote("hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test") -> hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,quota,hard,nobrowse, loc=hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: sun_mount: parse(sun): mounting root /data, mountpoint nfs4_test, what hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test, fstype nfs4, options quota,hard
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): root=/data name=nfs4_test what=hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test, fstype=nfs4, options=quota,hard
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): nfs options="quota,hard", nobind=0, nosymlink=0, ro=0
Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time: 0.000186
Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host hpc.xxx.xxx.xxx.xxx cost 185 weight 0
Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of hosts that support NFS4 over TCP
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling mkdir_path /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve server hpc.xxx.xxx.xxx.xxx: Name or service not known
Aug 26 22:05:04 node1 automount[42397]: mount(nfs): nfs: mount failure hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test on /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: dev_ioctl_send_fail: token = 1193
Aug 26 22:05:04 node1 automount[42397]: failed to mount /data/nfs4_test

Please let me know if you need anything else.

Comment 4 Ian Kent 2013-08-27 03:18:59 UTC
(In reply to Prakash Velayutham from comment #3)
> (In reply to Ian Kent from comment #2)
> > Please send a full debug log by setting LOGGING="debug" in the
> > autofs configuration.
> > 
> > Make sure that syslog is recording facility daemon level debug
> > and greater to ensure we capture the log information.
> 
> Not sure if this is what you are looking for. I have sanitized.
> xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the
> pool from the SmartConnect zone.

Yep, that's what I need.

snip ...

> Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40

Assuming the nn.nn.nn.nn is the correct address, it seems strange
that autofs was able to resolve the host name ...

> Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time:
> 0.000186
> Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host
> hpc.xxx.xxx.xxx.xxx cost 185 weight 0
> Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of
> hosts that support NFS4 over TCP
> Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> mkdir_path /data/nfs4_test
> Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
> /data/nfs4_test
> Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve
> server hpc.xxx.xxx.xxx.xxx: Name or service not known

but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks
like the mount command is OK.

Steve, heard anything about name resolution problems with mount.nfs4(8)?
Any other thoughts on why this might be happening?

Ian

Comment 5 Prakash Velayutham 2013-08-27 03:44:36 UTC
(In reply to Ian Kent from comment #4)
> (In reply to Prakash Velayutham from comment #3)
> > (In reply to Ian Kent from comment #2)
> > > Please send a full debug log by setting LOGGING="debug" in the
> > > autofs configuration.
> > > 
> > > Make sure that syslog is recording facility daemon level debug
> > > and greater to ensure we capture the log information.
> > 
> > Not sure if this is what you are looking for. I have sanitized.
> > xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the
> > pool from the SmartConnect zone.
> 
> Yep, that's what I need.
> 
> snip ...
> 
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> > hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
> 
> Assuming the nn.nn.nn.nn is the correct address, it seems strange
> that autofs was able to resolve the host name ...
> 
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time:
> > 0.000186
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host
> > hpc.xxx.xxx.xxx.xxx cost 185 weight 0
> > Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of
> > hosts that support NFS4 over TCP
> > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > mkdir_path /data/nfs4_test
> > Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling
> > mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
> > /data/nfs4_test
> > Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve
> > server hpc.xxx.xxx.xxx.xxx: Name or service not known
> 
> but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks
> like the mount command is OK.
> 
> Steve, heard anything about name resolution problems with mount.nfs4(8)?
> Any other thoughts on why this might be happening?
> 
> Ian

Just to give more info.

1. SELinux is disabled.
2. Mount works when I manually run:
mount -t nfs4 hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS5_Test /mnt
3. Mount works when I have the following in the automount nisMapEntry:
-fstype=nfs4,quota,hard,nobrowse mm.mm.mm.mm:/ifs/data/NFS4_Test
4. Mount fails only when I use the hostname in the nisMapEntry (and only with an Isilon SmartConnect zone, I think, since hostname resolution works when I automount off NetApp):
-fstype=nfs4,quota,hard,nobrowse hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test

mm.mm.mm.mm above is one of the IPs in the IP address pool.
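
For anyone comparing the resolver paths involved, a quick check might look like this (generic commands, assuming mount.nfs4 resolves names through the ordinary libc/nsswitch path; the hostname is the sanitized placeholder from above):

# what the libc resolver returns (the nsswitch path)
getent hosts hpc.xxx.xxx.xxx.xxx

# what a direct DNS query returns, bypassing nsswitch
dig +short hpc.xxx.xxx.xxx.xxx

# confirm the lookup order and name servers the client uses
grep ^hosts /etc/nsswitch.conf
cat /etc/resolv.conf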

Thanks.

Comment 6 Prakash Velayutham 2013-08-30 01:38:04 UTC
(In reply to Prakash Velayutham from comment #5)
> snip ...

Wondering if there are any updates with this.

Thanks.

Comment 7 Prakash Velayutham 2013-09-05 16:09:10 UTC
(In reply to Ian Kent from comment #4)
> snip ...

Hi Ian,

Any updates? Wondering if I am the only one seeing this issue.

Thanks,
Prakash

Comment 8 Ian Kent 2013-09-06 02:31:07 UTC
(In reply to Prakash Velayutham from comment #7)
> snip ...
> 
> Hi Ian,
> 
> Any updates? Wondering if I am the only one seeing this issue.

TBH I don't know where to look, since the name resolution appears to
be working in autofs and the mount command generated by autofs also
looks OK.

Ian

Comment 9 Prakash Velayutham 2013-09-20 19:24:58 UTC
(In reply to Ian Kent from comment #8)
> snip ...
> 
> > Any updates? Wondering if I am the only one seeing this issue.
> 
> TBH I don't know where to look, since the name resolution appears to
> be working in autofs and the mount command generated by autofs also
> looks OK.
> 
> Ian

Is there any other debugging method or tool I can use to get more data that would help you shed some light on this? Currently, I am down to using one node in the Isilon cluster, and that is not efficient at all.

Thanks,
Prakash
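
One low-level way to get more data would be to trace the exact mount command autofs logs (a sketch using the standard strace tool; the command line is copied from the debug log in comment 3):

# record every network-related syscall mount.nfs4 makes, including
# its DNS lookups, in /tmp/mount.trace
strace -f -e trace=network -o /tmp/mount.trace \
    mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test /data/nfs4_test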

Comment 10 RHEL Program Management 2013-10-14 02:33:05 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 12 Steve Dickson 2015-12-01 16:33:30 UTC
(In reply to Ian Kent from comment #4)
> snip ...
> 
> > Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> > hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
> 
> Assuming the nn.nn.nn.nn is the correct address, it seems strange
> that autofs was able to resolve the host name ...
Does 'host nn.nn.nn.nn' resolve? 

> 
> snip ...
> 
> but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks
> like the mount command is OK.
> 
> Steve, heard anything about name resolution problems with mount.nfs4(8)?
No, not that I'm aware of...

> Any other thoughts on why this might be happening?
I think it's definitely a DNS issue, since mount uses the getaddrbyXXX()
routines to resolve addresses...

Maybe a network trace of the failure might help?
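
A minimal version of such a trace, assuming tcpdump is available and eth0 is the client's interface, might be:

# capture DNS traffic while triggering the failing automount
tcpdump -i eth0 -n -s 0 -w /tmp/dns.pcap port 53 &
ls /data/nfs4_test    # trigger the automount attempt
kill %1

# then inspect which names were queried and what answers came back
tcpdump -n -r /tmp/dns.pcap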

Comment 13 Jan Kurik 2017-12-06 12:49:48 UTC
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/

