Bug 1060714
| Summary: | chmod -R of deep directory raises NullPointerException | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Martin Kudlej <mkudlej> |
| Component: | rhs-hadoop | Assignee: | Bradley Childs <bchilds> |
| Status: | CLOSED NOTABUG | QA Contact: | Martin Kudlej <mkudlej> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | low | | |
| Version: | unspecified | CC: | aavati, eboyd, esammons, matt, nlevinki, rhs-bugs, shtripat, vbellur |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-05-26 18:16:44 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Per the Apr-02 bug triage meeting, granting both devel and pm acks.

Martin, the chmod in your problem description isn't correct. Could you supply the proper command?

```
su test1 -c "hadoop fs -chmod -R 000 $(find ./d1/ -name d500)"
```

returns:

```
find: `./d1/': No such file or directory
```

If I run this:

```
hadoop fs -chmod -R 755 d1
```

it completes OK, after a very long time.

If I go to the FUSE mount of the gluster volume and try to change a directory to mode 000, I get this:

```
[mapred@vm-1 mapred]$ chown -R 000 result/
chown: changing ownership of `result/_partition.lst': Operation not permitted
chown: changing ownership of `result/_SUCCESS': Operation not permitted
chown: changing ownership of `result/part-r-00000': Operation not permitted
chown: changing ownership of `result/': Operation not permitted
```

All of this makes me believe that 000 isn't a valid mode, or that there is a bug with gluster+FUSE. Either way, changing a directory to mode 000 would have to be fixed in gluster+FUSE before it will work in Hadoop.
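The `find` failure above can be seen without Hadoop at all: the `$(find ...)` command substitution runs against the local working directory, while `hadoop fs -mkdir` created the tree on the Gluster volume, so locally `./d1/` does not exist, `find` errors out, and the substitution expands to nothing. A minimal sketch (the scratch directory and `err.txt` file are illustration choices, not from the report):

```shell
# Sketch: the $(find ...) part of the reported command runs on the LOCAL
# filesystem, while hadoop fs -mkdir created d1/.../d500 on the Gluster
# volume. Locally ./d1/ does not exist, so find errors out and the command
# substitution expands to nothing, leaving chmod -R with no path argument.
cd "$(mktemp -d)"                            # empty scratch dir: no ./d1 here
find ./d1/ -name d500 2>err.txt || true      # find fails; capture its stderr
cat err.txt                                  # "No such file or directory"
paths="$(find ./d1/ -name d500 2>/dev/null)" || true
[ -z "$paths" ] && echo "substitution is empty"
```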
Description of problem:

I've created a directory tree which is 500 dirs deep:

```
$ d="$(seq -s "/d" 1 500)"
$ su test1 -c "hadoop fs -mkdir -p d$d"
```

Then I've tried to change their permissions:

```
$ su test1 -c "hadoop fs -chmod -R 000 $(find ./d1/ -name d500)"
```

and I've got this exception:

```
14/02/03 09:22:07 INFO glusterfs.GlusterVolume: Initializing gluster volume..
14/02/03 09:22:07 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
14/02/03 09:22:07 INFO glusterfs.GlusterFileSystem: Initializing GlusterFS, CRC disabled.
14/02/03 09:22:07 INFO glusterfs.GlusterFileSystem: GIT INFO={git.commit.id.abbrev=51e5108, git.commit.user.email=bchilds, git.commit.message.full=2.1.5 branch/build , git.commit.id=51e5108fbec0b50d921aeb00ba2489bbdbe3d6ff, git.commit.message.short=2.1.5 branch/build, git.commit.user.name=childsb, git.build.user.name=Unknown, git.commit.id.describe=2.1.4-21-g51e5108, git.build.user.email=Unknown, git.branch=master, git.commit.time=17.01.2014 @ 16:05:54 EST, git.build.time=21.01.2014 @ 02:19:28 EST}
14/02/03 09:22:07 INFO glusterfs.GlusterFileSystem: GIT_TAG=2.1.4
14/02/03 09:22:07 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
14/02/03 09:22:07 INFO glusterfs.GlusterVolume: Initializing gluster volume..
14/02/03 09:22:07 INFO glusterfs.GlusterVolume: Root of Gluster file system is /mnt/glusterfs
14/02/03 09:22:07 INFO glusterfs.GlusterVolume: mapreduce/superuser daemon : yarn
14/02/03 09:22:07 INFO glusterfs.GlusterVolume: Working directory is : glusterfs:/user/test1
14/02/03 09:22:07 INFO glusterfs.GlusterVolume: Write buffer size : 131072
-chmod: Fatal internal error
java.lang.NullPointerException
        at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:269)
        at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:347)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:308)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:305)
```

Inotify log does not help.

Version-Release number of selected component (if applicable):

How reproducible:
100% with octal permissions 00X and 111, and possibly others; 0% with octal permission 777.

Actual results:
It raises NullPointerException.

Expected results:
Recursive permission set will work without exception.
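One plausible reading of the reproducibility pattern (this is a guess, not confirmed by the report): every failing mode listed clears the owner read bit on the directory, and a directory you cannot read cannot be listed, so the recursion has no valid listing to iterate once permissions are dropped. The sketch below only does local permission arithmetic on the modes mentioned; no Hadoop is involved:

```shell
# Check the owner read bit (octal 4) for the modes named in the report:
# failing modes 000/00X/111 all clear it, the working mode 777 sets it.
for mode in 000 001 007 111 777; do
  owner=$(( 0$mode / 64 ))                 # owner digit of the octal mode
  if [ $(( owner & 4 )) -ne 0 ]; then
    echo "$mode: owner read bit set - listing allowed"
  else
    echo "$mode: owner read bit clear - listing denied"
  fi
done
```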
Additional info:

```
$ cat /var/log/hadoop-yarn/yarn/yarn-yarn-resourcemanager-dhcp-lab-124.englab.brq.redhat.com.out
ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14880
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32768
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```
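As a side note on the reproduction steps, the path-building one-liner from the problem description expands as follows (a shortened sketch, depth 5 instead of 500):

```shell
# seq -s "/d" joins the numbers 1..N with the separator "/d", and the
# literal leading "d" completes the first component, giving d1/d2/.../dN
# as the argument for hadoop fs -mkdir -p.
d="$(seq -s "/d" 1 5)"
echo "d$d"        # d1/d2/d3/d4/d5
```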