| Summary: | Index out of bounds exception while doing 'teravalidate' in a dist-stripe-replicate volume | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | M S Vishwanath Bhat <vbhat> |
| Component: | HDFS | Assignee: | Venky Shankar <vshankar> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | pre-release | CC: | gluster-bugs, mzywusko |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | 3.3.0qa9 | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
This should be resolved with the read_child > 0 fix that went in for afr. It's fixed now. I am able to run teravalidate fine.
When I tried to run the teravalidate MapReduce application after running terasort, I received a `java.lang.IndexOutOfBoundsException`. I have pasted the command output and backtrace below.

```
root@ubuntu1:/home/hadoop/hadoop-0.20.2# ./bin/hadoop jar hadoop-0.20.2-examples.jar teravalidate sort-out sort-report
Initializing GlusterFS
11/08/12 03:13:46 INFO mapred.FileInputFormat: Total input paths to process : 1
java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
        at java.util.ArrayList.rangeCheck(ArrayList.java:571)
        at java.util.ArrayList.get(ArrayList.java:349)
        at org.apache.hadoop.fs.glusterfs.GlusterFSXattr.getHints(GlusterFSXattr.java:348)
        at org.apache.hadoop.fs.glusterfs.GlusterFSXattr.getPathInfo(GlusterFSXattr.java:80)
        at org.apache.hadoop.fs.glusterfs.GlusterFileSystem.getFileBlockLocations(GlusterFileSystem.java:452)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:222)
        at org.apache.hadoop.examples.terasort.TeraInputFormat.getSplits(TeraInputFormat.java:209)
        at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:810)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:781)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:730)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1249)
        at org.apache.hadoop.examples.terasort.TeraValidate.run(TeraValidate.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.examples.terasort.TeraValidate.main(TeraValidate.java:153)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
root@ubuntu1:/home/hadoop/hadoop-0.20.2#
```
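For context, the `Index: 1, Size: 1` message above comes from `ArrayList.get()` being asked for element 1 of a one-element list inside `GlusterFSXattr.getHints()`, which maps file blocks to bricks for split placement. The sketch below is not the GlusterFSXattr code from the stack trace; it is a minimal, hypothetical Java illustration (the class `StripeHintSketch` and all names in it are invented) of how indexing per-stripe location hints without a bounds check reproduces this exception on a dist-stripe-replicate layout, and how a guard would avoid it.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical illustration only -- not the actual
// org.apache.hadoop.fs.glusterfs.GlusterFSXattr implementation.
public class StripeHintSketch {

    public static void main(String[] args) {
        // Suppose parsing the pathinfo xattr of a file on a
        // dist-stripe-replicate volume yielded only one hint list,
        // while the caller expects one list per stripe.
        List<List<String>> stripeHints = new ArrayList<List<String>>();
        stripeHints.add(Arrays.asList("server1:/export/brick1"));

        int expectedStripes = 2; // assumed stripe count of the volume

        for (int i = 0; i < expectedStripes; i++) {
            // Unchecked access reproduces the reported failure:
            // stripeHints.get(1) on a list of size 1 throws
            // java.lang.IndexOutOfBoundsException: Index: 1, Size: 1
            //
            // List<String> hints = stripeHints.get(i);

            // A bounds check keeps the job alive and surfaces the mismatch.
            if (i < stripeHints.size()) {
                System.out.println("stripe " + i + " -> " + stripeHints.get(i));
            } else {
                System.err.println("no location hints parsed for stripe " + i
                        + " (only " + stripeHints.size() + " list(s) available)");
            }
        }
    }
}
```

The actual resolution referenced in the comment above (the read_child > 0 change in afr) landed on the GlusterFS side, so this sketch only explains the symptom seen in the backtrace, not the patch itself.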