Bug 1121116 - [Quota] 2 types of "quota exceeded" error messages
Summary: [Quota] 2 types of "quota exceeded" error messages
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhs-hadoop
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Bradley Childs
QA Contact: BigData QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-18 12:08 UTC by Martin Kudlej
Modified: 2016-02-01 16:23 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-01 16:23:19 UTC
Embargoed:



Description Martin Kudlej 2014-07-18 12:08:58 UTC
Description of problem:
A quota of X MB is set on a directory. I tried to copy a file of size X/2 MB into that directory.
The first copy succeeded, as I expected.
The second one ends with this error:
$ hadoop fs -put ./100 /quota/dir5/100.10
14/07/18 13:56:28 INFO glusterfs.GlusterVolume: Initializing gluster volume..
14/07/18 13:56:28 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
14/07/18 13:56:28 INFO glusterfs.GlusterFileSystem: Initializing GlusterFS,  CRC disabled.
14/07/18 13:56:28 INFO glusterfs.GlusterFileSystem: GIT INFO={git.commit.id.abbrev=bace8a2, git.commit.user.email=bchilds.rdu2.redhat.com, git.commit.message.full=[update RPM spec file/changelog] - 2.3.2
, git.commit.id=bace8a2d0269353c9ad293c8b2509ae145b56888, git.commit.message.short=[update RPM spec file/changelog] - 2.3.2, git.commit.user.name=Brad Childs, git.build.user.name=Unknown, git.commit.id.describe=2.3.10-9-gbace8a2, git.build.user.email=Unknown, git.branch=2.3.2, git.commit.time=10.07.2014 @ 09:59:23 EDT, git.build.time=10.07.2014 @ 10:10:47 EDT}
14/07/18 13:56:28 INFO glusterfs.GlusterFileSystem: GIT_TAG=2.3.10
14/07/18 13:56:28 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
14/07/18 13:56:29 INFO glusterfs.GlusterVolume: Initializing gluster volume..
14/07/18 13:56:29 INFO glusterfs.GlusterVolume: Gluster volume: HadoopVol1 at : /mnt/glusterfs/HadoopVol1
14/07/18 13:56:29 INFO glusterfs.GlusterVolume: Working directory is : glusterfs:/user/bigtop
14/07/18 13:56:29 INFO glusterfs.GlusterVolume: Write buffer size : 131072
14/07/18 13:56:29 INFO glusterfs.GlusterVolume: Default block size : 67108864
>>>> put: /mnt/glusterfs/HadoopVol1/quota/dir5/100.10 (Disk quota exceeded)

The file is partially copied to disk.

If I try it again:

$ hadoop fs -put ./100 /quota/dir5/100.11
14/07/18 13:56:41 INFO glusterfs.GlusterVolume: Initializing gluster volume..
14/07/18 13:56:41 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
14/07/18 13:56:41 INFO glusterfs.GlusterFileSystem: Initializing GlusterFS,  CRC disabled.
14/07/18 13:56:41 INFO glusterfs.GlusterFileSystem: GIT INFO={git.commit.id.abbrev=bace8a2, git.commit.user.email=bchilds.rdu2.redhat.com, git.commit.message.full=[update RPM spec file/changelog] - 2.3.2
, git.commit.id=bace8a2d0269353c9ad293c8b2509ae145b56888, git.commit.message.short=[update RPM spec file/changelog] - 2.3.2, git.commit.user.name=Brad Childs, git.build.user.name=Unknown, git.commit.id.describe=2.3.10-9-gbace8a2, git.build.user.email=Unknown, git.branch=2.3.2, git.commit.time=10.07.2014 @ 09:59:23 EDT, git.build.time=10.07.2014 @ 10:10:47 EDT}
14/07/18 13:56:41 INFO glusterfs.GlusterFileSystem: GIT_TAG=2.3.10
14/07/18 13:56:41 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
14/07/18 13:56:41 INFO glusterfs.GlusterVolume: Initializing gluster volume..
14/07/18 13:56:41 INFO glusterfs.GlusterVolume: Gluster volume: HadoopVol1 at : /mnt/glusterfs/HadoopVol1
14/07/18 13:56:41 INFO glusterfs.GlusterVolume: Working directory is : glusterfs:/user/bigtop
14/07/18 13:56:41 INFO glusterfs.GlusterVolume: Write buffer size : 131072
14/07/18 13:56:41 INFO glusterfs.GlusterVolume: Default block size : 67108864
Exception in thread "main" org.apache.hadoop.fs.FSError: java.io.IOException: Disk quota exceeded
        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:238)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:59)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:52)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
        at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
        at org.apache.hadoop.fs.glusterfs.GlusterVolume.rename(GlusterVolume.java:251)
        at org.apache.hadoop.fs.FilterFileSystem.rename(FilterFileSystem.java:210)
        at org.apache.hadoop.fs.FilterFileSystem.rename(FilterFileSystem.java:210)
        at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.rename(CommandWithDestination.java:322)
        at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:282)  
        at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:245)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:188)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:173)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:168) 
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:145)
        at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:229)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:305)
Caused by: java.io.IOException: Disk quota exceeded
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:282)
        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.write(RawLocalFileSystem.java:236)
        ... 30 more

That is the correct exception: the file does not fit on the disk because of the quota.

I think the first error message should take the same form (i.e. an exception that can be caught by the application).
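
As a minimal sketch of the kind of unification suggested here, assuming a hypothetical QuotaExceededException and message-based detection (a real fix in rhs-hadoop would more likely inspect the underlying errno):

import java.io.IOException;

/* Hypothetical unified exception -- a child of IOException, so that
 * existing catch (IOException e) blocks keep working. */
class QuotaExceededException extends IOException {
    QuotaExceededException(String path, Throwable cause) {
        super("Disk quota exceeded: " + path, cause);
    }
}

class QuotaErrors {
    /* Re-wrap a low-level EDQUOT failure in the unified type. The string
     * check mirrors the strerror(EDQUOT) text seen in both traces above;
     * this is illustrative only. */
    static IOException translate(String path, IOException e) {
        String msg = e.getMessage();
        if (msg != null && msg.contains("Disk quota exceeded")) {
            return new QuotaExceededException(path, e);
        }
        return e;
    }
}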


Version-Release number of selected component (if applicable):
glusterfs-3.6.0.24-1.el6rhs.x86_64
glusterfs-api-3.6.0.24-1.el6rhs.x86_64
glusterfs-cli-3.6.0.24-1.el6rhs.x86_64
glusterfs-fuse-3.6.0.24-1.el6rhs.x86_64
glusterfs-geo-replication-3.6.0.24-1.el6rhs.x86_64
glusterfs-libs-3.6.0.24-1.el6rhs.x86_64
glusterfs-rdma-3.6.0.24-1.el6rhs.x86_64
glusterfs-server-3.6.0.24-1.el6rhs.x86_64
gluster-nagios-addons-0.1.9-1.el6rhs.x86_64
gluster-nagios-common-0.1.3-2.el6rhs.noarch
hadoop-2.2.0.2.0.6.0-101.el6.x86_64
hadoop-client-2.2.0.2.0.6.0-101.el6.x86_64
hadoop-hdfs-2.2.0.2.0.6.0-101.el6.x86_64
hadoop-libhdfs-2.2.0.2.0.6.0-101.el6.x86_64
hadoop-lzo-0.5.0-1.x86_64
hadoop-lzo-native-0.5.0-1.x86_64
hadoop-mapreduce-2.2.0.2.0.6.0-101.el6.x86_64
hadoop-mapreduce-historyserver-2.2.0.2.0.6.0-101.el6.x86_64
hadoop-yarn-2.2.0.2.0.6.0-101.el6.x86_64
hadoop-yarn-nodemanager-2.2.0.2.0.6.0-101.el6.x86_64
hadoop-yarn-resourcemanager-2.2.0.2.0.6.0-101.el6.x86_64
rhs-hadoop-2.3.2-5.el6rhs.noarch
rhs-hadoop-install-1_27-1.el6rhs.noarch
samba-glusterfs-3.6.9-168.4.el6rhs.x86_64
vdsm-gluster-4.14.7.2-1.el6rhs.noarch

How reproducible:
100%

Actual results and Expected results:
There are two different error messages for exceeding the quota. I think both should be unified into one, and it should be an exception, as in the second case.
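
With a single exception type, both failure modes could be handled the same way on the application side; for example (continuing the hypothetical sketch above):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

class PutWithQuotaHandling {
    /* Hypothetical caller-side handling, assuming the unified
     * QuotaExceededException sketched above were thrown on both paths. */
    static void copyWithQuotaCheck(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[131072]; // matches the 128 KiB write buffer in the logs
        try {
            for (int n; (n = in.read(buf)) != -1; ) { // stands in for the real FileSystem copy
                out.write(buf, 0, n);
            }
        } catch (QuotaExceededException e) {
            // One catch block covers what are today two different error shapes.
            System.err.println("Quota exceeded; partial file may need cleanup: " + e.getMessage());
        }
    }
}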

Comment 2 Jakub Rumanovsky 2014-10-13 12:57:34 UTC
I found another type of exception while testing quotas with Bigtop.
 
------------------
Steps to reproduce
------------------
1. Set a quota on directory dir1, then create dir2 inside dir1 and set a quota on that directory as well. 

2. When the quota is exceeded in either the directory or the subdirectory, the MapReduce job ends with the exception shown below.

------------------
Expected results
------------------
It is not clear from the exception trace that this is a quota problem, and there are already two other types of exceptions possible, so I would like to suggest unifying the exceptions into one (maybe a child of IOException); see the sketch after this paragraph.
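
A minimal illustration of why this failure mode is so opaque (plain java.io; the Hadoop code path is analogous):

import java.io.File;
import java.io.IOException;

class MkdirsQuotaMasking {
    /* Why the MapReduce trace below only reports "Mkdirs failed":
     * File.mkdirs() returns a bare boolean, so every cause of failure
     * (permissions, quota, full disk, ...) collapses into one generic
     * message with no root cause attached. */
    static void createParent(File dir) throws IOException {
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("Mkdirs failed to create " + dir);
        }
    }
}

A unified quota exception would therefore need the mkdir path to probe for the quota condition explicitly, since the boolean mkdirs() contract gives the caller nothing to translate.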


==========================MAPRED JOB STACK TRACE============================

    14/10/13 12:59:48 INFO glusterfs.GlusterVolume: Initializing gluster volume..
    14/10/13 12:59:48 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
    14/10/13 12:59:48 INFO glusterfs.GlusterFileSystem: Initializing GlusterFS,  CRC disabled.
    14/10/13 12:59:48 INFO glusterfs.GlusterFileSystem: GIT INFO={git.commit.id.abbrev=acca0e1, git.commit.user.email=bchilds.rdu2.redhat.com, git.commit.message.full=[update RPM spec file/changelog] - 2.3.3
    , git.commit.id=acca0e11de5f49a67474af70f6758d4182852760, git.commit.message.short=[update RPM spec file/changelog] - 2.3.3, git.commit.user.name=Brad Childs, git.build.user.name=Unknown, git.commit.id.describe=2.3.10-12-gacca0e1, git.build.user.email=Unknown, git.branch=2.3.3, git.commit.time=29.07.2014 @ 10:52:06 EDT, git.build.time=29.07.2014 @ 11:06:36 EDT}
    14/10/13 12:59:48 INFO glusterfs.GlusterFileSystem: GIT_TAG=2.3.10
    14/10/13 12:59:48 INFO glusterfs.GlusterFileSystem: Configuring GlusterFS
    14/10/13 12:59:48 INFO glusterfs.GlusterVolume: Initializing gluster volume..
    14/10/13 12:59:48 INFO glusterfs.GlusterVolume: Gluster volume: HadoopVol1 at : /mnt/glusterfs/HadoopVol1
    14/10/13 12:59:48 INFO glusterfs.GlusterVolume: Gluster volume: gv0 at : /mnt/gv0
    14/10/13 12:59:48 INFO glusterfs.GlusterVolume: Working directory is : glusterfs:/user/jerry
    14/10/13 12:59:48 INFO glusterfs.GlusterVolume: Write buffer size : 131072
    14/10/13 12:59:48 INFO glusterfs.GlusterVolume: Default block size : 67108864
    14/10/13 12:59:50 INFO impl.TimelineClientImpl: Timeline service address: http://master-hdp21:8188/ws/v1/timeline/
    14/10/13 12:59:50 INFO client.RMProxy: Connecting to ResourceManager at master-hdp21/192.168.122.190:8050
    14/10/13 12:59:50 INFO glusterfs.GlusterVolume: Initializing gluster volume..
    14/10/13 12:59:50 INFO glusterfs.GlusterVolume: Initializing gluster volume..
    14/10/13 12:59:50 INFO glusterfs.GlusterVolume: Gluster volume: HadoopVol1 at : /mnt/glusterfs/HadoopVol1
    14/10/13 12:59:50 INFO glusterfs.GlusterVolume: Gluster volume: gv0 at : /mnt/gv0
    14/10/13 12:59:50 INFO glusterfs.GlusterVolume: Working directory is : glusterfs:/user/jerry
    14/10/13 12:59:50 INFO glusterfs.GlusterVolume: Write buffer size : 131072
    14/10/13 12:59:50 INFO glusterfs.GlusterVolume: Default block size : 67108864
    14/10/13 12:59:51 INFO terasort.TeraSort: Generating 1000000 using 2
    14/10/13 12:59:52 INFO mapreduce.JobSubmitter: number of splits:2
    14/10/13 12:59:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1413189162518_0013
    14/10/13 12:59:53 INFO impl.YarnClientImpl: Submitted application application_1413189162518_0013
    14/10/13 12:59:53 INFO mapreduce.Job: The url to track the job: http://master-hdp21:8088/proxy/application_1413189162518_0013/
    14/10/13 12:59:53 INFO mapreduce.Job: Running job: job_1413189162518_0013
    14/10/13 13:00:04 INFO mapreduce.Job: Job job_1413189162518_0013 running in uber mode : false
    14/10/13 13:00:04 INFO mapreduce.Job:  map 0% reduce 0%
    14/10/13 13:00:17 INFO mapreduce.Job:  map 50% reduce 0%
    14/10/13 13:00:17 INFO mapreduce.Job: Task Id : attempt_1413189162518_0013_m_000001_0, Status : FAILED
    Error: java.io.IOException: Mkdirs failed to create glusterfs:/user/jerry/dir1/in2/_temporary/1/_temporary/attempt_1413189162518_0013_m_000001_0
            at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:263)
            at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:252)
            at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:286)
            at org.apache.hadoop.fs.FilterFileSystem.create(FilterFileSystem.java:178)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:773)
            at org.apache.hadoop.examples.terasort.TeraOutputFormat.getRecordWriter(TeraOutputFormat.java:99)
            at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624)
            at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744)
            at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
            at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
            at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
     
    14/10/13 13:00:23 INFO mapreduce.Job: Task Id : attempt_1413189162518_0013_m_000001_1, Status : FAILED
    Error: java.io.IOException: Mkdirs failed to create glusterfs:/user/jerry/dir1/in2/_temporary/1/_temporary/attempt_1413189162518_0013_m_000001_1
            at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:263)
            at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:252)
            at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:286)
            at org.apache.hadoop.fs.FilterFileSystem.create(FilterFileSystem.java:178)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:773)
            at org.apache.hadoop.examples.terasort.TeraOutputFormat.getRecordWriter(TeraOutputFormat.java:99)
            at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624)
            at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744)
            at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
            at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
            at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
     
    14/10/13 13:00:29 INFO mapreduce.Job: Task Id : attempt_1413189162518_0013_m_000001_2, Status : FAILED
    Error: java.io.IOException: Mkdirs failed to create glusterfs:/user/jerry/dir1/in2/_temporary/1/_temporary/attempt_1413189162518_0013_m_000001_2
            at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:263)
            at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:252)
            at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:286)
            at org.apache.hadoop.fs.FilterFileSystem.create(FilterFileSystem.java:178)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:784)
            at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:773)
            at org.apache.hadoop.examples.terasort.TeraOutputFormat.getRecordWriter(TeraOutputFormat.java:99)
            at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624)
            at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744)
            at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
            at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
            at java.security.AccessController.doPrivileged(Native Method)
            at javax.security.auth.Subject.doAs(Subject.java:415)
            at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
            at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
     
    14/10/13 13:00:36 INFO mapreduce.Job:  map 100% reduce 0%
    14/10/13 13:00:37 INFO mapreduce.Job: Job job_1413189162518_0013 failed with state FAILED due to: Task failed task_1413189162518_0013_m_000001
    Job failed as tasks failed. failedMaps:1 failedReduces:0
     
    14/10/13 13:00:37 INFO mapreduce.Job: Counters: 32
            File System Counters
                    FILE: Number of bytes read=0
                    FILE: Number of bytes written=97803
                    FILE: Number of read operations=0
                    FILE: Number of large read operations=0
                    FILE: Number of write operations=0
                    GLUSTERFS: Number of bytes read=50000167
                    GLUSTERFS: Number of bytes written=100000000
                    GLUSTERFS: Number of read operations=0
                    GLUSTERFS: Number of large read operations=0
                    GLUSTERFS: Number of write operations=0
            Job Counters
                    Failed map tasks=4
                    Launched map tasks=5
                    Other local map tasks=5
                    Total time spent by all maps in occupied slots (ms)=34145
                    Total time spent by all reduces in occupied slots (ms)=0
                    Total time spent by all map tasks (ms)=34145
                    Total vcore-seconds taken by all map tasks=34145
                    Total megabyte-seconds taken by all map tasks=23286890
            Map-Reduce Framework
                    Map input records=500000
                    Map output records=500000
                    Input split bytes=82
                    Spilled Records=0
                    Failed Shuffles=0
                    Merged Map outputs=0
                    GC time elapsed (ms)=51
                    CPU time spent (ms)=1330
                    Physical memory (bytes) snapshot=99323904
                    Virtual memory (bytes) snapshot=1204695040
                    Total committed heap usage (bytes)=32636928
            org.apache.hadoop.examples.terasort.TeraGen$Counters
                    CHECKSUM=1074598070305752
            File Input Format Counters
                    Bytes Read=0
            File Output Format Counters
                    Bytes Written=50000000

Comment 3 Steve Watt 2016-02-01 16:23:19 UTC
This solution is no longer available from Red Hat.

