Bug 762269 (GLUSTER-537) - infinite loop in inode_path ()
Summary: infinite loop in inode_path ()
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-537
Product: GlusterFS
Classification: Community
Component: core
Version: 3.0.0
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: Anand Avati
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-01-12 20:41 UTC by Amar Tumballi
Modified: 2015-12-01 16:45 UTC
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Amar Tumballi 2010-01-12 20:41:46 UTC
-- log on client --

[2009-12-18 09:16:45] W [fuse-bridge.c:491:fuse_entry_cbk] glusterfs-fuse: LOOKUP(/mulligan_SNP_CC/nova_cc_wrapper/5.normal.1_slide_LMP_1_slide_PE/single_tag_experiment/PE.2_and_3.trimmed/bpv/chr17) inode (ptr=0x2aaab000e5b0, ino=1, gen=0) found conflict (ptr=0x1d48e960, ino=1, gen=0)
[2009-12-18 09:16:45] C [inode.c:883:inode_path] inode: possible infinite loop detected, forcing break. name=(chr17)
[2009-12-18 09:16:45] W [fuse-bridge.c:585:fuse_lookup] glusterfs-fuse: 2897048: LOOKUP 46912518708848/chr17 (fuse_loc_fill() failed)
[2009-12-18 09:16:45] C [inode.c:883:inode_path] inode: possible infinite loop detected, forcing break. name=((null))

After that, every subsequent lookup runs into the same infinite loop.
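For context, inode_path() builds a path string by walking dentry parent pointers toward the root; a cycle in that chain would never terminate unless the walk is capped, which is what the "possible infinite loop detected, forcing break" message reports. A minimal sketch of such a depth guard (hypothetical names and cap, not the actual GlusterFS code):

```c
#include <assert.h>
#include <string.h>

#define MAX_DEPTH 256   /* assumed cap; a real guard would be tuned */

struct dentry {
    const char    *name;
    struct dentry *parent;   /* NULL marks the root */
};

/* Build "/a/b" for dentry 'd' by walking parent pointers toward the
 * root.  Returns 0 on success, -1 if more than MAX_DEPTH ancestors
 * are visited -- treated as a cycle and broken out of, much like the
 * "possible infinite loop detected, forcing break" message above. */
static int dentry_path(const struct dentry *d, char *buf, size_t len)
{
    const char *names[MAX_DEPTH];
    int depth = 0;

    for (; d && d->parent; d = d->parent) {   /* skip the root itself */
        if (depth >= MAX_DEPTH)
            return -1;                        /* cycle suspected: force a break */
        names[depth++] = d->name;
    }

    buf[0] = '\0';
    if (depth == 0)                           /* 'd' was the root */
        strncat(buf, "/", len - 1);
    while (depth-- > 0) {                     /* emit root-first */
        strncat(buf, "/", len - strlen(buf) - 1);
        strncat(buf, names[depth], len - strlen(buf) - 1);
    }
    return 0;
}
```

Without the depth check, a dentry whose parent chain loops back on itself (as the conflicting inode 1 entries above suggest) would spin forever inside the walk.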


--- dump of client --- (for inode 1)

[xlator.mount.fuse.itable.active.1]
xlator.mount.fuse.itable.active.1.nlookup=0
xlator.mount.fuse.itable.active.1.generation=0
xlator.mount.fuse.itable.active.1.ref=4
xlator.mount.fuse.itable.active.1.ino=1
xlator.mount.fuse.itable.active.1.st_mode=0
xlator.protocol.client.client09.inode.1.par=382304840
xlator.protocol.client.client12.inode.1.par=776750663
xlator.protocol.client.client10.inode.1.par=596181456
xlator.protocol.client.client13.inode.1.par=347783632
xlator.protocol.client.client11.inode.1.par=698977143
xlator.protocol.client.client16.inode.1.par=50036851
xlator.protocol.client.client05.inode.1.par=343606310
xlator.protocol.client.client14.inode.1.par=183648836
xlator.protocol.client.client15.inode.1.par=632472135
xlator.protocol.client.client06.inode.1.par=809452176
xlator.protocol.client.client01.inode.1.par=692453924
xlator.protocol.client.client07.inode.1.par=187876115
xlator.protocol.client.client03.inode.1.par=640254621
xlator.protocol.client.client08.inode.1.par=172999305
xlator.cluster.dht.distribute.inode.1.cnt=16
xlator.cluster.dht.distribute.inode.1.preset=0
xlator.cluster.dht.distribute.inode.1.gen=33
xlator.cluster.dht.distribute.inode.1.type=0
xlator.cluster.dht.distribute.inode.1.list[0].err=2
xlator.cluster.dht.distribute.inode.1.list[0].start=2147483646
xlator.cluster.dht.distribute.inode.1.list[0].stop=2454267023
xlator.cluster.dht.distribute.inode.1.list[0].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[0].xlator.name=client01
xlator.cluster.dht.distribute.inode.1.list[1].err=2
xlator.cluster.dht.distribute.inode.1.list[1].start=0
xlator.cluster.dht.distribute.inode.1.list[1].stop=0
xlator.cluster.dht.distribute.inode.1.list[1].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[1].xlator.name=client02
xlator.cluster.dht.distribute.inode.1.list[2].err=0
xlator.cluster.dht.distribute.inode.1.list[2].start=2454267024
xlator.cluster.dht.distribute.inode.1.list[2].stop=2761050401
xlator.cluster.dht.distribute.inode.1.list[2].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[2].xlator.name=client03
xlator.cluster.dht.distribute.inode.1.list[3].err=2
xlator.cluster.dht.distribute.inode.1.list[3].start=0
xlator.cluster.dht.distribute.inode.1.list[3].stop=0
xlator.cluster.dht.distribute.inode.1.list[3].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[3].xlator.name=client04
xlator.cluster.dht.distribute.inode.1.list[4].err=2
xlator.cluster.dht.distribute.inode.1.list[4].start=2761050402
xlator.cluster.dht.distribute.inode.1.list[4].stop=3067833779
xlator.cluster.dht.distribute.inode.1.list[4].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[4].xlator.name=client05
xlator.cluster.dht.distribute.inode.1.list[5].err=0
xlator.cluster.dht.distribute.inode.1.list[5].start=3067833780
xlator.cluster.dht.distribute.inode.1.list[5].stop=3374617157
xlator.cluster.dht.distribute.inode.1.list[5].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[5].xlator.name=client06
xlator.cluster.dht.distribute.inode.1.list[6].err=2
xlator.cluster.dht.distribute.inode.1.list[6].start=3374617158
xlator.cluster.dht.distribute.inode.1.list[6].stop=3681400535
xlator.cluster.dht.distribute.inode.1.list[6].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[6].xlator.name=client07
xlator.cluster.dht.distribute.inode.1.list[7].err=2
xlator.cluster.dht.distribute.inode.1.list[7].start=3681400536
xlator.cluster.dht.distribute.inode.1.list[7].stop=3988183913
xlator.cluster.dht.distribute.inode.1.list[7].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[7].xlator.name=client08
xlator.cluster.dht.distribute.inode.1.list[8].err=2
xlator.cluster.dht.distribute.inode.1.list[8].start=3988183914
xlator.cluster.dht.distribute.inode.1.list[8].stop=4294967295
xlator.cluster.dht.distribute.inode.1.list[8].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[8].xlator.name=client09
xlator.cluster.dht.distribute.inode.1.list[9].err=2
xlator.cluster.dht.distribute.inode.1.list[9].start=0
xlator.cluster.dht.distribute.inode.1.list[9].stop=306783377
xlator.cluster.dht.distribute.inode.1.list[9].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[9].xlator.name=client10
xlator.cluster.dht.distribute.inode.1.list[10].err=2
xlator.cluster.dht.distribute.inode.1.list[10].start=306783378
xlator.cluster.dht.distribute.inode.1.list[10].stop=613566755
xlator.cluster.dht.distribute.inode.1.list[10].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[10].xlator.name=client11
xlator.cluster.dht.distribute.inode.1.list[11].err=0
xlator.cluster.dht.distribute.inode.1.list[11].start=613566756
xlator.cluster.dht.distribute.inode.1.list[11].stop=920350133
xlator.cluster.dht.distribute.inode.1.list[11].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[11].xlator.name=client12
xlator.cluster.dht.distribute.inode.1.list[12].err=2
xlator.cluster.dht.distribute.inode.1.list[12].start=920350134
xlator.cluster.dht.distribute.inode.1.list[12].stop=1227133511
xlator.cluster.dht.distribute.inode.1.list[12].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[12].xlator.name=client13
xlator.cluster.dht.distribute.inode.1.list[13].err=2
xlator.cluster.dht.distribute.inode.1.list[13].start=1227133512
xlator.cluster.dht.distribute.inode.1.list[13].stop=1533916889
xlator.cluster.dht.distribute.inode.1.list[13].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[13].xlator.name=client14
xlator.cluster.dht.distribute.inode.1.list[14].err=2
xlator.cluster.dht.distribute.inode.1.list[14].start=1533916890
xlator.cluster.dht.distribute.inode.1.list[14].stop=1840700267
xlator.cluster.dht.distribute.inode.1.list[14].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[14].xlator.name=client15
xlator.cluster.dht.distribute.inode.1.list[15].err=2
xlator.cluster.dht.distribute.inode.1.list[15].start=1840700268
xlator.cluster.dht.distribute.inode.1.list[15].stop=2147483645
xlator.cluster.dht.distribute.inode.1.list[15].xlator.type=protocol/client
xlator.cluster.dht.distribute.inode.1.list[15].xlator.name=client16
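The list[N] entries above are DHT layout slices: each subvolume owns a [start, stop] range of the 32-bit hash space, and a file name hashes to exactly one range, which selects the subvolume. A toy range lookup over such entries (hypothetical struct, not the real dht_layout_t API), using values from the dump:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One slice of the 32-bit hash space, as in the list[N] entries in
 * the dump (start/stop per subvolume).  Hypothetical layout type;
 * the real structure lives in the DHT translator. */
struct layout_entry {
    uint32_t start;
    uint32_t stop;
};

/* Return the index of the entry whose [start, stop] range contains
 * 'hash', or -1 if no entry covers it (a hole in the layout). */
static int layout_search(const struct layout_entry *list, size_t n,
                         uint32_t hash)
{
    for (size_t i = 0; i < n; i++)
        if (hash >= list[i].start && hash <= list[i].stop)
            return (int)i;
    return -1;
}
```

Note that several entries in the dump have start=stop=0 alongside err=2, i.e. zeroed-out slices; a hash falling only into such holes would fail the range search.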

---------------------

It looks like the inode for '/' has been linked under some other path. There are no corresponding logs on the servers.

---

Comment 1 Anand Avati 2010-02-22 07:29:30 UTC
PATCH: http://patches.gluster.com/patch/2792 in master (inode: guard against possible infinite loops)

Comment 2 Anand Avati 2010-02-22 07:29:37 UTC
PATCH: http://patches.gluster.com/patch/2792 in release-3.0 (inode: guard against possible infinite loops)

