Bug 1003626 - java.lang.NullPointerException is raised during execution of multifilewc mapreduce job
Summary: java.lang.NullPointerException is raised during execution of multifilewc mapr...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: gluster-hadoop
Version: mainline
Hardware: x86_64
OS: Linux
Priority: high
Severity: unspecified
Target Milestone: ---
Assignee: Bradley Childs
QA Contact: Martin Kudlej
URL:
Whiteboard:
Depends On:
Blocks: 1005838
 
Reported: 2013-09-02 14:03 UTC by Martin Bukatovic
Modified: 2014-04-23 20:16 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1005838 (view as bug list)
Environment:
Last Closed: 2014-03-03 16:27:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Martin Bukatovic 2013-09-02 14:03:00 UTC
Description of problem
======================

When running the multifilewc job from the hadoop-examples jar file, the job
fails with a NullPointerException.

Version-Release number of selected component (if applicable)
============================================================

glusterfs shim 2.1

hortonworks hadoop distribution (HDP 1.3):

~~~
rpm -qa | grep hadoop
hadoop-pipes-1.2.0.1.3.0.0-107.el6.x86_64
hadoop-1.2.0.1.3.0.0-107.el6.x86_64
hadoop-libhdfs-1.2.0.1.3.0.0-107.el6.x86_64
hadoop-native-1.2.0.1.3.0.0-107.el6.x86_64
hadoop-lzo-0.5.0-1.x86_64
hadoop-lzo-native-0.5.0-1.x86_64
hadoop-sbin-1.2.0.1.3.0.0-107.el6.x86_64
~~~

How reproducible
================

always

Steps to Reproduce
==================

1. configure the hadoop cluster to use glusterfs via shim 2.1
2. copy example text files into the directory /mnt/hadoop/examples/text
   (where /mnt/hadoop is the glusterfs mountpoint),
   e.g. wget http://www.gutenberg.org/ebooks/{76,135,17405}.txt.utf-8
3. run multifilewc from the hadoop examples jar:

~~~
hadoop jar /usr/lib/hadoop/hadoop-examples.jar multifilewc examples/text examples-output/multifilewc
~~~

Actual results
==============

Job fails because of java.lang.NullPointerException:

~~~
$ hadoop jar /usr/lib/hadoop/hadoop-examples.jar multifilewc examples/text examples-output/multifilewc
13/09/02 15:51:40 INFO glusterfs.GlusterFileSystemCRC: Initializing gluster volume..
13/09/02 15:51:40 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
13/09/02 15:51:40 INFO glusterfs.GlusterFileSystem: Initializing GlusterFS,  CRC disabled.
13/09/02 15:51:40 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
13/09/02 15:51:40 INFO glusterfs.GlusterFileSystemCRC: Initializing gluster volume..
13/09/02 15:51:41 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/09/02 15:51:42 INFO mapred.FileInputFormat: Total input paths to process : 4
13/09/02 15:51:43 INFO mapred.JobClient: Running job: job_201309021551_0001
13/09/02 15:51:44 INFO mapred.JobClient:  map 0% reduce 0%
13/09/02 15:51:55 INFO mapred.JobClient: Task Id : attempt_201309021551_0001_m_000000_0, Status : FAILED
java.lang.NullPointerException
        at org.apache.hadoop.fs.BufferedFSInputStream.getPos(BufferedFSInputStream.java:50)
        at org.apache.hadoop.fs.FSDataInputStream.getPos(FSDataInputStream.java:41)
        at org.apache.hadoop.examples.MultiFileWordCount$MultiFileLineRecordReader.getPos(MultiFileWordCount.java:152)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.getPos(MapTask.java:243)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:230)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:209)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:429)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:365)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
        at org.apache.hadoop.mapred.Child.main(Child.java:249)

<the same exception repeated a few times; skipped>

13/09/02 15:52:16 INFO mapred.JobClient: Job complete: job_201309021551_0001
13/09/02 15:52:16 INFO mapred.JobClient: Counters: 7
13/09/02 15:52:16 INFO mapred.JobClient:   Job Counters 
13/09/02 15:52:16 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=26796
13/09/02 15:52:16 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/09/02 15:52:16 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/09/02 15:52:16 INFO mapred.JobClient:     Launched map tasks=4
13/09/02 15:52:16 INFO mapred.JobClient:     Data-local map tasks=4
13/09/02 15:52:16 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
13/09/02 15:52:16 INFO mapred.JobClient:     Failed map tasks=1
13/09/02 15:52:16 INFO mapred.JobClient: Job Failed: # of failed Map Tasks exceeded allowed limit. FailedCount: 1. LastFailedTask: task_201309021551_0001_m_000000
java.io.IOException: Job failed!
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
        at org.apache.hadoop.examples.MultiFileWordCount.run(MultiFileWordCount.java:258)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.examples.MultiFileWordCount.main(MultiFileWordCount.java:264)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
~~~


Expected results
================

Job completes successfully.

Additional info
===============

Otherwise the hadoop cluster works normally; the wordcount job finishes without
any problems on the same data:

~~~
$ hadoop jar /usr/lib/hadoop/hadoop-examples.jar wordcount examples/text examples-output/wordcount
~~~

Comment 3 Jay Vyas 2013-09-06 19:12:17 UTC
As per a conversation with M. Bukatovic, it looks like bigtop downloads the input for you, so recreating the bug is easy.

We should check into two issues:

1) Does SystemTestsBT properly record failures for hadoop/, or does it gloss over them?

2) Is the getPos implementation in RawLocalFileSystem sufficient to support the operations in this MapReduce job?

Comment 4 Jay Vyas 2013-09-07 15:16:45 UTC
Confirmed that this reproduces in our hadoop-1.1.2.23 distribution as well (HDP 1.2):

java.lang.NullPointerException
	at org.apache.hadoop.fs.BufferedFSInputStream.getPos(BufferedFSInputStream.java:48)
	at org.apache.hadoop.fs.FSDataInputStream.getPos(FSDataInputStream.java:41)
	at org.apache.hadoop.examples.MultiFileWordCount$MultiFileLineRecordReader.getPos(MultiFileWordCount.java:152)
	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.getPos(MapTask.java:250)
	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:237)
	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:216)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1178)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)


Thanks for reporting this - we will definitely dive into it. At first glance, it appears that the custom implementation of the RecordReader in this mapreduce job can throw NullPointerExceptions when a stream has been closed early.

Comment 6 Jay Vyas 2013-09-07 15:44:12 UTC
And also: this can be reproduced without any complex input. Just a simple

echo "here are \n some words " >> /mnt/glusterfs/inputdir/input.txt

hadoop jar /home/install/hadoop-1.1.2.23//hadoop-examples-1.1.2.23.jar multifilewc glusterfs://inputdir/input.txt glusterfs:///output2

is enough to reproduce it. Also note that the above invocation uses fully qualified paths.

That said, ruling out other possibilities:

The program terminates properly if the input files are empty/incorrect, so this bug is not due to improper URI handling. 

This is further evidence that this is indeed a bug that occurs inside an open file stream, either during or after the reading of input from a file.

Comment 8 Jay Vyas 2013-09-09 23:42:10 UTC
Alas, another update. Clearly this bug is happening because the stream being read is closed. By adding a debugging log which tries to print available(), we yield the following, slightly more revealing stacktrace:


2013-09-09 19:38:53,876 WARN org.apache.hadoop.mapred.Child: Error running child
java.io.IOException: Stream closed
	at java.io.BufferedInputStream.getInIfOpen(BufferedInputStream.java:134)
	at java.io.BufferedInputStream.available(BufferedInputStream.java:381)
	at java.io.FilterInputStream.available(FilterInputStream.java:142)
	at org.rhbd.others.MultiFileWordCount$MultiFileLineRecordReader.getPos(MultiFileWordCount.java:145)

^^ This tells us that, somehow, the BufferedInputStream is attempting to read from a stream which has already been closed.
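
To illustrate the mechanism with a standalone java.io sketch (purely illustrative, not the Hadoop classes themselves): closing a BufferedInputStream drops its reference to the wrapped stream, so any later call that reaches through to the inner stream fails. BufferedFSInputStream.getPos() appears to dereference that same nulled field without the "is it open?" check, which would explain why we see a raw NullPointerException instead of the friendlier "Stream closed" error.

~~~
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

// Standalone java.io illustration: BufferedInputStream.close() nulls out its
// reference to the wrapped stream, so any later call that reaches through to
// the inner stream fails.
public class ClosedStreamDemo {
    public static void main(String[] args) throws IOException {
        BufferedInputStream buf =
            new BufferedInputStream(new ByteArrayInputStream("some words".getBytes()));
        while (buf.read() != -1) { /* consume the whole stream */ }
        buf.close();                 // the inner "in" field is set to null here
        try {
            buf.available();         // getInIfOpen() -> IOException("Stream closed")
        } catch (IOException e) {
            System.out.println(e);   // prints: java.io.IOException: Stream closed
        }
    }
}
~~~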

Comment 9 Jay Vyas 2013-09-10 15:24:47 UTC
And finally, the following amendment to the record reader

https://github.com/jayunit100/hadoop-example-jobs/commit/b695a516597a415943bad0f087bb126d387772db

Actually fixes this job!  

Thus, I might venture to say that this is actually a bug in the way that the Record Reader is implemented.
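
For illustration only (this is a guess at the shape of such a change, not the commit above; the names are made up): a record-reader-side guard of this kind stops delegating getPos() to a stream that may already have been closed, and falls back to the last position it managed to report.

~~~
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;

// Hypothetical sketch of a job-side guard; "currentStream" and "lastPos" are
// illustrative names, not taken from the actual commit.
public class GuardedPositionTracker {
    private FSDataInputStream currentStream; // stream for the file currently being read
    private long lastPos = 0;                // last position successfully reported

    public void setStream(FSDataInputStream stream) {
        this.currentStream = stream;
    }

    public long getPos() {
        if (currentStream != null) {
            try {
                lastPos = currentStream.getPos();
            } catch (IOException | NullPointerException e) {
                // the stream was already closed; keep the cached value
            }
        }
        return lastPos;              // between files, report the last known position
    }
}
~~~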

Further investigation will be required to see what the correct behaviour of getPos should be in the case that the underlying input stream is null.  

The current status is thus: we need to determine why the underlying input stream in this record reader becomes null.

Comment 10 Jay Vyas 2013-09-10 19:53:39 UTC
Some more tests indicate that the LATEST version of multifilewc actually PASSES (hadoop 2.0 trunk), because it uses CombineFileInputFormat instead of the older, deprecated MultiFileInputFormat. Thus I think we can close this bug.

To confirm, you can analyze the difference between MultiFileWordCount and MultiFileWordCount2 in the source repo below:

https://github.com/jayunit100/hadoop-example-jobs/tree/master/src/main/java/org/rhbd/others

You can also run these two jobs and note that one fails, the other succeeds: 

git clone https://github.com/jayunit100/hadoop-example-jobs/

mvn package

mkdir /mnt/glusterfs/wmultin
echo "some text " >> /mnt/glusterfs/wmultin/file1
echo "some text .." >> /mnt/glusterfs/wmultin/file1
echo "some more ...." >> /mnt/glusterfs/wmultin/file2
echo "some more ....." >> /mnt/glusterfs/wmultin/file2

#Passes
hadoop jar hadoop-examples-1.0-SNAPSHOT.jar org.rhbd.others.MultiFileWordCount2 /wmultin/ /outputpass

#Fails
hadoop jar hadoop-examples-1.0-SNAPSHOT.jar org.rhbd.others.MultiFileWordCount /wmultin/ /outputfails

Comment 11 Jay Vyas 2013-09-12 16:32:39 UTC
And to further add some color to this: the offending multifilewc implementations appear to implement mapred.RecordReader instead of the newer mapreduce.RecordReader.

The OLDER version of the record reader seems to implement a getPos(), which "returns the current position of the input". The newer record reader classes, in the mapreduce package, do not provide this functionality - which makes sense, since getPos() may not be relevant for some record types.

And finally, getPos is called simply for metadata purposes in the TrackedRecordReader (also in the older mapred API), which records the position before and after reading from a RecordReader (TrackedRecordReader actually WRAPS a record reader).
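
A simplified sketch of that wrapping behaviour (not the actual MapTask source; the class name is made up), showing why getPos() has to stay usable even while the wrapped reader is switching between input files:

~~~
import java.io.IOException;
import org.apache.hadoop.mapred.RecordReader;

// Simplified sketch of how a TrackedRecordReader-style wrapper in the old
// mapred API derives bytes read: every next() call is bracketed with getPos().
public class ByteCountingReader<K, V> {
    private final RecordReader<K, V> rawIn; // the real record reader being wrapped
    private long bytesRead = 0;

    public ByteCountingReader(RecordReader<K, V> rawIn) {
        this.rawIn = rawIn;
    }

    public boolean next(K key, V value) throws IOException {
        long before = rawIn.getPos();  // position before reading the record
        boolean more = rawIn.next(key, value);
        long after = rawIn.getPos();   // fails if the reader just closed its current stream
        bytesRead += (after - before);
        return more;
    }

    public long getBytesRead() {
        return bytesRead;
    }
}
~~~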

In any case, I am going to trace the code path to confirm whether or not there is a possible patch we can add into the shim. At this point it's not clear what the behaviour of getPos() should be for closed streams.

Changing this ticket BACK to ASSIGNED until I hear back from someone in the hadoop community.

Comment 12 Jay Vyas 2013-09-12 17:30:21 UTC
Have heard back, and it appears that mapred.* is fully supported in 2.X.

I will have to dive in and make sure our implementation of FSInputStream is correct - if *yes*, then that will be essentially equivalent to confirming that the RecordReader in multifilewc has a bug.

Comment 13 Jay Vyas 2013-09-15 02:16:28 UTC
In any case, we now have a patch that fixes this: 

https://github.com/jayunit100/hadoop-glusterfs/compare/fix_inputstream_exception2?expand=1

The fix essentially wraps the underlying buffer so that, if an exception is thrown when getPos() is called, the last successful result of getPos() is returned.
This is enough to satisfy the corner case where the MapTask.moveToNext implementation in the mapred package overuses/abuses calls to getPos(), which would otherwise hit a null underlying stream when the reader is in this intermediate state (that is, in between files).
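
A minimal sketch of that idea (not the actual shim patch; the class name is illustrative), using Hadoop's Seekable interface to stand in for the buffered stream being wrapped:

~~~
import java.io.IOException;
import org.apache.hadoop.fs.Seekable;

// Sketch only: remember the last position the delegate reported successfully,
// and fall back to it when the delegate's underlying stream has already been
// closed (and therefore nulled out by its buffered wrapper).
public class PositionCachingSeekable implements Seekable {
    private final Seekable delegate;   // e.g. the buffered stream used inside the shim
    private long lastKnownPos = 0;     // last value getPos() returned successfully

    public PositionCachingSeekable(Seekable delegate) {
        this.delegate = delegate;
    }

    @Override
    public long getPos() throws IOException {
        try {
            lastKnownPos = delegate.getPos();
        } catch (IOException | NullPointerException e) {
            // delegate already dropped its underlying stream; keep the cached value
        }
        return lastKnownPos;
    }

    @Override
    public void seek(long pos) throws IOException {
        delegate.seek(pos);
    }

    @Override
    public boolean seekToNewSource(long targetPos) throws IOException {
        return delegate.seekToNewSource(targetPos);
    }
}
~~~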

An aside: to see why this bug is only a consequence of the mapred.* package's MapTask record readers, and NOT an issue in the newer, more streamlined mapreduce.* API, you can look at the difference in how "BYTES_READ" is calculated.

In mapreduce: the newer tracking input stream classes, which are implemented in RawLocalFileSystem, track the number of bytes read internally in the FileSystem, and they simply update a global Statistics object inside the FileSystem API - rather than having external RecordReaders coupled to the notion of a byte position.
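
A rough sketch of that style of accounting (assumed shape, not the actual RawLocalFileSystem source): the filesystem's own stream bumps a shared Statistics counter on every read, so no caller ever has to ask for a byte position.

~~~
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.fs.FileSystem;

// Sketch only: byte counting done inside the FileSystem's own input stream,
// with the count pushed into a shared Statistics object instead of being
// derived from getPos() by an external RecordReader.
public class CountingInputStream extends InputStream {
    private final InputStream in;
    private final FileSystem.Statistics statistics;

    public CountingInputStream(InputStream in, FileSystem.Statistics statistics) {
        this.in = in;
        this.statistics = statistics;
    }

    @Override
    public int read() throws IOException {
        int b = in.read();
        if (b >= 0 && statistics != null) {
            statistics.incrementBytesRead(1);  // counted inside the FileSystem layer
        }
        return b;
    }

    @Override
    public void close() throws IOException {
        in.close();
    }
}
~~~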

In mapred: the RecordReaders described above, which are used by the MapTask class, EXPECT that getPos() functionality will be provided by the underlying FSInputStream, and call getPos() multiple times to determine how many bytes have been read. In a distributed context where a single record reader is switching between file input streams during reads (i.e. the multifilewordcount), this causes the exception above.

Why doesn't this happen in HDFS, then? It appears to be because the DFSClient, which returns the input stream for the DistributedFileSystem, does not wrap the underlying stream in any sort of buffering: the DFSInputStream does its OWN buffering, so a call to its getPos() doesn't result in a proxied call to a wrapped implementation that has been set to null.

To come full circle: this bug happens because the buffered wrapper sets the underlying stream to null once it is done being read - but the getPos functionality needs to remain usable for a little while after the stream is consumed, because of the way that MapTask uses custom RecordReaders in the old mapred.* API. We now have a patch that fixes this in the GlusterVolume implementation of RawLocalFileSystem, by storing the "pos" field locally so that, if the underlying call fails, a valid long representing the last position in the byte stream can be returned.

Comment 15 Scott Haines 2013-11-19 19:54:14 UTC
Per the 11/13 bug triage meeting, re-assigning to bchilds and marking as high.

Comment 16 Jay Vyas 2013-11-19 20:08:05 UTC
The patch is still outstanding; not sure what we should do about it: https://github.com/gluster/glusterfs-hadoop/pull/59

Comment 18 Jay Vyas 2014-04-23 19:44:23 UTC
This is a hadoop 1.x bug, not 2.x. Is that the reason we closed it? Makes sense.

