| Summary: | sorting job fails with "IndexOutOfBoundsException" | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | M S Vishwanath Bhat <vbhat> |
| Component: | HDFS | Assignee: | Venky Shankar <vshankar> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | M S Vishwanath Bhat <vbhat> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | ||
| Version: | pre-release | CC: | gluster-bugs, mzywusko, vijay |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | glusterfs-3.4.0 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2013-07-24 17:57:23 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | |||
| Bug Blocks: | 817967 | ||
This should be fixed with current master.

Fixed now...

```
[root@QA-23 hadoop-0.20.2]# ./bin/hadoop jar hadoop-0.20.2-examples.jar sort rtext sort-out
Initializing GlusterFS
Running on 4 nodes to sort from /mnt/glusterfs/rtext into /mnt/glusterfs/sort-out with 7 reduces.
Job started: Fri Jun 08 06:00:49 EDT 2012
12/06/08 06:00:49 INFO mapred.FileInputFormat: Total input paths to process : 0
12/06/08 06:00:49 INFO mapred.JobClient: Running job: job_201206080542_0006
12/06/08 06:00:50 INFO mapred.JobClient:  map 0% reduce 0%
12/06/08 06:01:13 INFO mapred.JobClient:  map 0% reduce 28%
12/06/08 06:01:14 INFO mapred.JobClient:  map 0% reduce 42%
12/06/08 06:01:16 INFO mapred.JobClient:  map 0% reduce 57%
12/06/08 06:01:17 INFO mapred.JobClient:  map 0% reduce 100%
12/06/08 06:01:22 INFO mapred.JobClient: Job complete: job_201206080542_0006
12/06/08 06:01:22 INFO mapred.JobClient: Counters: 8
12/06/08 06:01:22 INFO mapred.JobClient:   Job Counters
12/06/08 06:01:22 INFO mapred.JobClient:     Launched reduce tasks=7
12/06/08 06:01:22 INFO mapred.JobClient:   Map-Reduce Framework
12/06/08 06:01:22 INFO mapred.JobClient:     Reduce input groups=0
12/06/08 06:01:22 INFO mapred.JobClient:     Combine output records=0
12/06/08 06:01:22 INFO mapred.JobClient:     Reduce shuffle bytes=0
12/06/08 06:01:22 INFO mapred.JobClient:     Reduce output records=0
12/06/08 06:01:22 INFO mapred.JobClient:     Spilled Records=0
12/06/08 06:01:22 INFO mapred.JobClient:     Combine input records=0
12/06/08 06:01:22 INFO mapred.JobClient:     Reduce input records=0
Job completed: Fri Jun 08 06:01:22 EDT 2012
The job took 33 seconds.
```
The setup is a 2x2x2 distributed-striped-replicated volume. The input data for the sort was generated using the randomwriter example. The issue happens with quick-slave-io both ON and OFF. The backtrace is pasted below.

```
attempt_201109301517_0001_m_000439_1: Initializing GlusterFS
11/09/30 15:35:07 INFO mapred.JobClient: Task Id : attempt_201109301517_0001_m_000429_1, Status : FAILED
java.lang.IndexOutOfBoundsException
	at java.io.DataInputStream.readFully(DataInputStream.java:192)
	at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
	at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1930)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2062)
	at org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:76)
	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:192)
	at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:176)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
	at org.apache.hadoop.mapred.Child.main(Child.java:170)
attempt_201109301517_0001_m_000429_1: Initializing GlusterFS
11/09/30 15:35:11 INFO mapred.JobClient: Job complete: job_201109301517_0001
11/09/30 15:35:11 INFO mapred.JobClient: Counters: 13
11/09/30 15:35:11 INFO mapred.JobClient:   Job Counters
11/09/30 15:35:11 INFO mapred.JobClient:     Rack-local map tasks=11
11/09/30 15:35:11 INFO mapred.JobClient:     Launched map tasks=1846
11/09/30 15:35:11 INFO mapred.JobClient:     Data-local map tasks=1835
11/09/30 15:35:11 INFO mapred.JobClient:     Failed map tasks=1
11/09/30 15:35:11 INFO mapred.JobClient:   FileSystemCounters
11/09/30 15:35:11 INFO mapred.JobClient:     FILE_BYTES_READ=134014624
11/09/30 15:35:11 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=3817988662
11/09/30 15:35:11 INFO mapred.JobClient:   Map-Reduce Framework
11/09/30 15:35:11 INFO mapred.JobClient:     Combine output records=0
11/09/30 15:35:11 INFO mapred.JobClient:     Map input records=350477
11/09/30 15:35:11 INFO mapred.JobClient:     Spilled Records=363193
11/09/30 15:35:11 INFO mapred.JobClient:     Map output bytes=3682046402
11/09/30 15:35:11 INFO mapred.JobClient:     Map input bytes=3691366698
11/09/30 15:35:11 INFO mapred.JobClient:     Combine input records=0
11/09/30 15:35:11 INFO mapred.JobClient:     Map output records=350477
java.io.IOException: Job failed!
	at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1252)
	at org.apache.hadoop.examples.Sort.run(Sort.java:176)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
	at org.apache.hadoop.examples.Sort.main(Sort.java:187)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
	at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:616)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
root@ubuntu1:/home/hadoop/hadoop-0.20.2#
```
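For context, a rough sketch of how the setup described above might be reproduced with the hadoop-0.20.2 examples jar. The volume name, hostnames, brick paths, and directory names below are placeholders rather than values taken from the report, and the exact brick ordering is only an assumption about the original 2x2x2 layout:

```
# Create and start a 2x2x2 distributed-striped-replicated volume (8 bricks);
# servers and brick paths are placeholders.
gluster volume create hadoop-vol stripe 2 replica 2 \
    server1:/export/brick1 server2:/export/brick1 \
    server3:/export/brick1 server4:/export/brick1 \
    server1:/export/brick2 server2:/export/brick2 \
    server3:/export/brick2 server4:/export/brick2
gluster volume start hadoop-vol

# Mount the volume at the path the Hadoop jobs use.
mount -t glusterfs server1:/hadoop-vol /mnt/glusterfs

# Generate input data with the randomwriter example, then run the sort job
# that triggered the IndexOutOfBoundsException. The report notes the failure
# occurred with quick-slave-io both ON and OFF in the plugin configuration.
./bin/hadoop jar hadoop-0.20.2-examples.jar randomwriter rtext
./bin/hadoop jar hadoop-0.20.2-examples.jar sort rtext sort-out
```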