Bug 1296674

Summary: rhel6-nfs.rhts.eng.bos.redhat.com/nfsvers=3_udp/special hangs mustang
Product: Red Hat Enterprise Linux 7
Reporter: Bill Peck <bpeck>
Component: kernel-aarch64
Assignee: nfs-maint
kernel-aarch64 sub component: NFS
QA Contact: Filesystem QE <fs-qe>
Status: CLOSED NOTABUG
Docs Contact:
Severity: unspecified
Priority: unspecified
CC: steved
Version: 7.3
Target Milestone: rc
Target Release: ---
Hardware: aarch64
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-01-13 15:16:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Bill Peck 2016-01-07 19:48:13 UTC
Description of problem:
[ 6518.303674] nfs: server rhel6-nfs.rhts.eng.bos.redhat.com not responding, still trying 
[ 6518.312088] nfs: server rhel6-nfs.rhts.eng.bos.redhat.com OK 
[ 6540.808424] list_add corruption. next->prev should be prev (fffffe015caeb170), but was fffffe00ba50d208. (next=fffffe015caeb170). 

Version-Release number of selected component (if applicable):
  kernel-4.4.0-0.rc5.22.el7


How reproducible:
not every time

Actual results:

[ 6540.820046] ------------[ cut here ]------------ 
[ 6540.824639] WARNING: at lib/list_debug.c:29 
[ 6540.828798] Modules linked in: rpcsec_gss_krb5 nfnetlink_queue nfnetlink_log nfnetlink bluetooth rfkill binfmt_misc tun ext4 mbcache jbd2 loop nls_koi8_u nls_cp932 ts_kmp nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack vfat fat sg xgene_rng gpio_xgene_sb gpio_generic nfsd xfs libcrc32c sdhci_acpi sdhci mmc_core ahci_xgene libahci_platform dm_mirror dm_region_hash dm_log dm_mod realtek(E) [last unloaded: zlib] 
[ 6540.865085]  
[ 6540.866568] CPU: 7 PID: 15487 Comm: cc1 Tainted: G            E   4.4.0-0.rc5.22.el7.aarch64 #1 
[ 6540.875221] Hardware name: AppliedMicro Mustang/Mustang, BIOS 1.1.0 Oct 20 2015 
[ 6540.882492] task: fffffe003017a280 ti: fffffe00b85dc000 task.ti: fffffe00b85dc000 
[ 6540.889939] PC is at __list_add+0x74/0xb8 
[ 6540.893926] LR is at __list_add+0x74/0xb8 
[ 6540.897913] pc : [<fffffe00003bd450>] lr : [<fffffe00003bd450>] pstate: 60000145 
[ 6540.905268] sp : fffffe00b85dfb00 
[ 6540.908564] x29: fffffe00b85dfb00 x28: fffffe00bc0001a0  
[ 6540.913866] x27: 0000000000000000 x26: fffffe0168010240  
[ 6540.919167] x25: fffffe00b85dfd50 x24: fffffe015619c3c0  
[ 6540.924469] x23: fffffe000024abd4 x22: fffffe00b9da7200  
[ 6540.929771] x21: fffffe015caeb170 x20: fffffe015caeb170  
[ 6540.935074] x19: fffffe00bbdb1d08 x18: 000003ffd0b4fcf0  
[ 6540.940376] x17: 000003ff8e29d028 x16: fffffe0000226768  
[ 6540.945679] x15: 0000000000000004 x14: 3037316265616335  
[ 6540.950981] x13: 3130656666666666 x12: 2820766572702065  
[ 6540.956284] x11: 6220646c756f6873 x10: 20766572703e2d74  
[ 6540.961586] x9 : 0000000000002ec9 x8 : 6265616335313065  
[ 6540.966887] x7 : fffffe0001373fc0 x6 : fffffe0001373c34  
[ 6540.972190] x5 : 0000000000000000 x4 : 0000000000000000  
[ 6540.977495] x3 : 0000000000000000 x2 : fffffe01fffa2458  
[ 6540.982798] x1 : 0000000000000001 x0 : 0000000000000075  
[ 6540.988099]  
[ 6540.989579] ---[ end trace 25c802646676d247 ]--- 
[ 6540.994171] Call Trace: 
[ 6540.996604] [<fffffe00003bd450>] __list_add+0x74/0xb8 
[ 6541.001629] [<fffffe0000289e54>] proc_reg_open+0xc8/0x130 
[ 6541.006999] [<fffffe0000224d7c>] do_dentry_open+0x1e8/0x300 
[ 6541.012542] [<fffffe0000226254>] vfs_open+0x6c/0x7c 
[ 6541.017394] [<fffffe00002340c4>] do_last+0x140/0xc3c 
[ 6541.022331] [<fffffe0000234c3c>] path_openat+0x7c/0x2c0 
[ 6541.027527] [<fffffe00002364b8>] do_filp_open+0x74/0xd0 
[ 6541.032724] [<fffffe0000226644>] do_sys_open+0x14c/0x228 
[ 6541.038007] [<fffffe00002267a4>] SyS_openat+0x3c/0x48 
[ 6541.043032] [<fffffe0000091a0c>] __sys_trace_return+0x0/0x4 
[ 6541.048592] list_add corruption. prev->next should be next (fffffe015caeb170), but was fffffe00ba50d208. (prev=fffffe015caeb170). 

Additional info:
Beaker links in follow-up comment

Comment 3 Steve Dickson 2016-01-13 15:16:21 UTC
(In reply to Bill Peck from comment #1)
> failing jobs:
> https://beaker.engineering.redhat.com/jobs/1185089
> https://beaker.engineering.redhat.com/recipes/2401654
It appears these failures happen because the watchdog timer
timed out. 

> 
> link to console logs:
> http://beaker-archive.app.eng.bos.redhat.com/beaker-logs/2016/01/11848/
> 1184858/2401654/console.log
> http://beaker-archive.app.eng.bos.redhat.com/beaker-logs/2016/01/11850/
> 1185089/2402238/console.log
These logs show that there were network problems with the
RHEL6 NFS server.

It is well known that using UDP on a noisy network is
not a good idea. That's why TCP is the default transport:
it handles those types of networks much better.

Assuming this is not consistently reproducible, I'm going
to close this as NOTABUG since this is a known issue
with the UDP transport.
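For reference, the transport is selectable at mount time; the test in the summary forced UDP (nfsvers=3_udp). A hedged sketch of the two variants (the server and export paths here are placeholders, not from this bug):

```shell
# NFSv3 over UDP -- what this test ran; fragile on lossy networks
mount -t nfs -o vers=3,proto=udp server.example.com:/export /mnt/nfs

# NFSv3 over TCP -- the default transport on modern kernels
mount -t nfs -o vers=3,proto=tcp server.example.com:/export /mnt/nfs
```

See nfs(5) for the proto= and vers= options; with UDP, timeo= and retrans= also matter, since lost datagrams are retried by the NFS client itself rather than by the transport.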

Comment 4 Bill Peck 2016-01-13 15:19:37 UTC
Hi Steve,

The system attempted to reboot first (that's the local watchdog) but was unable to, and that's why the external watchdog kicked in.

Even when using UDP, shouldn't the system be able to reboot?

Comment 5 Steve Dickson 2016-05-09 11:46:52 UTC
Yes... unless there is some type of hanging mount.