Bug 1001403
| Summary: | autofs gives "Failed to resolve server Name or service not known" when mounting from Isilon SmartConnect zone |
|---|---|
| Product: | Red Hat Enterprise Linux 6 |
| Component: | autofs |
| Version: | 6.4 |
| Hardware: | x86_64 |
| OS: | Linux |
| Status: | CLOSED WONTFIX |
| Severity: | high |
| Priority: | unspecified |
| Target Milestone: | rc |
| Target Release: | --- |
| Reporter: | Prakash Velayutham <prakash.velayutham> |
| Assignee: | Ian Kent <ikent> |
| QA Contact: | Filesystem QE <fs-qe> |
| CC: | ikent, prakash.velayutham, steved, swhiteho, xzhou |
| Last Closed: | 2017-12-06 12:49:48 UTC |
| Type: | Bug |
Description (Comment 1, Prakash Velayutham, 2013-08-27 01:37:40 UTC)
Comment 2 (Ian Kent):

Please send a full debug log by setting LOGGING="debug" in the autofs configuration.

Make sure that syslog is recording facility daemon level debug and greater to ensure we capture the log information.

Comment 3 (Prakash Velayutham):

(In reply to Ian Kent from comment #2)
> Please send a full debug log by setting LOGGING="debug" in the
> autofs configuration.
>
> Make sure that syslog is recording facility daemon level debug
> and greater to ensure we capture the log information.

Not sure if this is what you are looking for. I have sanitized. xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the pool from the SmartConnect zone.

Aug 26 22:05:04 node1 automount[42397]: handle_packet: type = 3
Aug 26 22:05:04 node1 automount[42397]: handle_packet_missing_indirect: token 1193, name nfs4_test, request pid 50212
Aug 26 22:05:04 node1 automount[42397]: attempting to mount entry /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: lookup_mount: lookup(ldap): looking up nfs4_test
Aug 26 22:05:04 node1 automount[42397]: do_bind: lookup(ldap): auth_required: 8, sasl_mech PLAIN
Aug 26 22:05:04 node1 automount[42397]: do_bind: lookup(ldap): ldap simple bind returned 0
Aug 26 22:05:04 node1 automount[42397]: lookup_one: lookup(ldap): searching for "(&(objectclass=nisObject)(|(cn=nfs4_test)(cn=/)(cn=\2A)))" under "CN=auto.data,CN=xxx,CN=automount,DC=dom2,DC=xxx,DC=xxx"
Aug 26 22:05:04 node1 automount[42397]: lookup_one: lookup(ldap): getting first entry for cn="nfs4_test"
Aug 26 22:05:04 node1 automount[42397]: lookup_one: lookup(ldap): examining first entry
Aug 26 22:05:04 node1 automount[42397]: validate_string_len: lookup(ldap): string nfs4_test encoded as nfs4_test
Aug 26 22:05:04 node1 automount[42397]: lookup_mount: lookup(ldap): nfs4_test -> -fstype=nfs4,quota,hard,nobrowse hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): expanded entry: -fstype=nfs4,quota,hard,nobrowse hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): gathered options: fstype=nfs4,quota,hard,nobrowse
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): dequote("hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test") -> hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: parse_mount: parse(sun): core of entry: options=fstype=nfs4,quota,hard,nobrowse, loc=hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test
Aug 26 22:05:04 node1 automount[42397]: sun_mount: parse(sun): mounting root /data, mountpoint nfs4_test, what hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test, fstype nfs4, options quota,hard
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): root=/data name=nfs4_test what=hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test, fstype=nfs4, options=quota,hard
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): nfs options="quota,hard", nobind=0, nosymlink=0, ro=0
Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40
Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: nfs v4 rpc ping time: 0.000186
Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: host hpc.xxx.xxx.xxx.xxx cost 185 weight 0
Aug 26 22:05:04 node1 automount[42397]: prune_host_list: selected subset of hosts that support NFS4 over TCP
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling mkdir_path /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: mount_mount: mount(nfs): calling mount -t nfs4 -s -o quota,hard hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve server hpc.xxx.xxx.xxx.xxx: Name or service not known
Aug 26 22:05:04 node1 automount[42397]: mount(nfs): nfs: mount failure hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test on /data/nfs4_test
Aug 26 22:05:04 node1 automount[42397]: dev_ioctl_send_fail: token = 1193
Aug 26 22:05:04 node1 automount[42397]: failed to mount /data/nfs4_test

Please let me know if you need anything else.

Comment 4 (Ian Kent):

(In reply to Prakash Velayutham from comment #3)
> Not sure if this is what you are looking for. I have sanitized.
> xxx.xxx.xxx.xxx is a delegated zone and nn.nn.nn.nn is an IP address in the
> pool from the SmartConnect zone.

Yep, that's what I need.

snip ...

> Aug 26 22:05:04 node1 automount[42397]: get_nfs_info: called with host
> hpc.xxx.xxx.xxx.xxx(nn.nn.nn.nn) proto 6 version 0x40

Assuming the nn.nn.nn.nn is the correct address, it seems strange that autofs was able to resolve the host name ...

snip ...

> Aug 26 22:05:04 node1 automount[42397]: >> mount.nfs4: Failed to resolve
> server hpc.xxx.xxx.xxx.xxx: Name or service not known

but then mount.nfs4(8) wasn't able to resolve it. Certainly it looks like the mount command is OK.

Steve, heard anything about name resolution problems with mount.nfs4(8)? Any other thoughts on why this might be happening?

Ian

Comment 5 (Prakash Velayutham):

(In reply to Ian Kent from comment #4)
> snip ...

Just to give more info.

1. SELinux is disabled.
2. Mount works when I manually enter:
   mount -t nfs4 hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS5_Test /mnt
3. Mount works when I have the following in the automount nisMapEntry:
   -fstype=nfs4,quota,hard,nobrowse mm.mm.mm.mm:/ifs/data/NFS4_Test
4. Mount does not work only when I have the hostname (and only with an Isilon SmartConnect zone, I think, because hostname resolution works when I automount off NetApp) in the nisMapEntry:
   -fstype=nfs4,quota,hard,nobrowse hpc.xxx.xxx.xxx.xxx:/ifs/data/NFS4_Test

mm.mm.mm.mm above is one of the IPs in the IP address pool.

Thanks.

Comment 6 (Prakash Velayutham):

(In reply to Prakash Velayutham from comment #5)
> snip ...
> > Assuming the nn.nn.nn.nn is the correct address, it seems strange
> > that autofs was able to resolve the host name ...
> snip ...

Wondering if there are any updates with this. Thanks.

Comment 7 (Prakash Velayutham):

(In reply to Ian Kent from comment #4)
> snip ...

Hi Ian,

Any updates? Wondering if I am the only one seeing this issue.

Thanks,
Prakash

Comment 8 (Ian Kent):

(In reply to Prakash Velayutham from comment #7)
> Any updates? Wondering if I am the only one seeing this issue.

TBH I don't know where to look, since the name resolution appears to be working in autofs and the mount command generated by autofs also looks OK.
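Editorially worth noting at this point in the thread: both automount and mount.nfs4 ultimately resolve names through the libc resolver, and a SmartConnect zone hands out rotating pool addresses, so an intermittent answer (or a transient failure) is one plausible way autofs could succeed moments before mount.nfs4 fails. A minimal diagnostic sketch, polling getaddrinfo() and tallying the answers; the host name `hpc.example.com` is a placeholder for the sanitized zone name, not taken from the log:

```python
import socket
import time
from collections import Counter

def probe(hostname, attempts=10, delay=0.5):
    """Call getaddrinfo() repeatedly (the same libc path mount.nfs4
    goes through) and tally the answers, so rotating pool addresses
    or intermittent resolution failures stand out in the counts."""
    results = Counter()
    for _ in range(attempts):
        try:
            infos = socket.getaddrinfo(hostname, None,
                                       proto=socket.IPPROTO_TCP)
            addrs = tuple(sorted({info[4][0] for info in infos}))
            results[addrs] += 1
        except socket.gaierror as err:
            results["FAILED: %s" % err.strerror] += 1
        time.sleep(delay)
    return results

# "hpc.example.com" stands in for the SmartConnect zone name.
for outcome, count in probe("hpc.example.com", attempts=3).items():
    print(count, outcome)
```

A steady failure points at the client's resolver configuration; a mix of successes and failures points at the delegated zone's DNS answers.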
Ian

Comment 9 (Prakash Velayutham):

(In reply to Ian Kent from comment #8)
> TBH I don't know where to look, since the name resolution appears to be
> working in autofs and the mount command generated by autofs also looks OK.

Is there any other debugging method/tool I can use to get more data that will help you shed some light on this? Currently, I am down to using one node in the Isilon cluster and that is not efficient at all.

Thanks,
Prakash

Comment 10:

This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.

Comment 11 (steved):

(In reply to Ian Kent from comment #4)
> Assuming the nn.nn.nn.nn is the correct address, it seems strange
> that autofs was able to resolve the host name ...

Does 'host nn.nn.nn.nn' resolve?

> snip ...
>
> Steve, heard anything about name resolution problems with mount.nfs4(8)?

No, not that I'm aware of...

> Any other thoughts on why this might be happening?

I think it's definitely a DNS issue, since mount uses the getaddrbyXXX() routines to resolve addresses... Maybe a network trace of the failure might help?

Comment 12:

Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available. The official life cycle policy can be reviewed here: http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL: https://access.redhat.com/
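For anyone reproducing this on a still-supported release, the network-trace suggestion in the thread can be approximated without a full packet capture: timestamp a resolver call immediately before re-running the exact mount command automount generated, so a transient DNS failure shows up in the interleaved output. This is a diagnostic sketch only; the server, export, and mountpoint names are placeholders mirroring the sanitized log, and the script must run as root for the mount itself to succeed:

```python
import datetime
import socket
import subprocess

def resolve_then_mount(server, export, mountpoint):
    """Log what the libc resolver says right before issuing the same
    'mount -t nfs4' command automount ran, so resolver state and the
    mount result can be correlated in time."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    try:
        addr = socket.getaddrinfo(server, None)[0][4][0]
        print(f"{stamp} resolver: {server} -> {addr}")
    except socket.gaierror as err:
        print(f"{stamp} resolver FAILED for {server}: {err}")
    # Same command shape the debug log shows automount generating.
    cmd = ["mount", "-t", "nfs4", "-s", "-o", "quota,hard",
           f"{server}:{export}", mountpoint]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(f"{stamp} mount rc={result.returncode} {result.stderr.strip()}")
    return result.returncode

# Placeholder names mirroring the sanitized map entry; run as root:
# resolve_then_mount("hpc.example.com", "/ifs/data/NFS4_Test", "/data/nfs4_test")
```

If the resolver line fails at the same instant the mount fails, the problem is squarely in DNS for the SmartConnect zone rather than in autofs or mount.nfs4.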