Bug 1219048
Summary: | Data Tiering: Enabling quota command fails with "quota command failed : Commit failed on localhost" | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | Joseph Elwin Fernandes <josferna> |
Component: | tiering | Assignee: | bugs <bugs> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | bugs <bugs> |
Severity: | urgent | Docs Contact: | |
Priority: | urgent | ||
Version: | 3.7.0 | CC: | annair, bugs, dlambrig, nchilaka, sankarshan |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Bug Fix | |
Doc Text: | Story Points: | --- | |
Clone Of: | 1214219 | Environment: | |
Last Closed: | 2015-05-14 17:27:33 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1214219 | ||
Bug Blocks: | 1186580, 1199352, 1214666, 1229259, 1260923 |
Description: Joseph Elwin Fernandes, 2015-05-06 13:31:26 UTC
REVIEW: http://review.gluster.org/10611 (glusterd/quota/tiering: Fixing volgen of quotad) posted (#1) for review on release-3.7 by Joseph Fernandes (josferna)

REVIEW: http://review.gluster.org/10611 (glusterd/quota/tiering: Fixing volgen of quotad) posted (#2) for review on release-3.7 by Joseph Fernandes (josferna)

Reproduced this on the BETA2 build too, hence moving it to ASSIGNED.

I tested this with glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild for Fedora 21, updated on 13 May 2015:
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/fedora-21-x86_64/glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild/

And it works:

```
[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]# gluster v quota test enable
volume quota : success
```

Please find the vol info:

```
[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]# gluster volume info

Volume Name: test
Type: Tier
Volume ID: a64cdd30-aaaa-4692-8cb2-2f94659a4d13
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: rhs-srv-08:/home/ssd/s2
Brick2: rhs-srv-09:/home/ssd/s2
Brick3: rhs-srv-08:/home/ssd/s1
Brick4: rhs-srv-09:/home/ssd/s1
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: rhs-srv-09:/home/disk/d1
Brick6: rhs-srv-08:/home/disk/d1
Brick7: rhs-srv-09:/home/disk/d2
Brick8: rhs-srv-08:/home/disk/d2
Options Reconfigured:
features.inode-quota: on
features.quota: on
cluster.read-freq-threshold: 4
cluster.write-freq-threshold: 4
features.record-counters: on
performance.io-cache: off
performance.quick-read: off
cluster.tier-promote-frequency: 180
cluster.tier-demote-frequency: 180
features.ctr-enabled: on
performance.readdir-ahead: on
```

Please note that quotad is running:

```
[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]# ps -ef | grep gluster
root 24472     1 0 May13 ?     00:00:02 /usr/sbin/glusterd -p /var/run/glusterd.pid
root 24682     1 0 00:24 ?     00:00:01 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f44ab838f2da2718654348889bbe6dfb.socket --xlator-option *replicate*.node-uuid=316a021d-44b9-4fc0-b454-5b3c68a927f8
root 25168     1 2 00:30 ?     00:00:02 /usr/sbin/glusterfs -s localhost --volfile-id test -l /var/log/glusterfs/quota-mount-test.log -p /var/run/gluster/test.pid --client-pid -5 /var/run/gluster/test/
root 25181     1 2 00:30 ?     00:00:02 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/gluster/e8a9003c9022266961a6f2768b238291.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
root 25236 24055 0 00:31 pts/0 00:00:00 grep --color=auto gluster
```

Which build did you use to reproduce the issue?

Moving the issue back to QA as it is fixed in the latest build.

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
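For reference, the verification in the comments above can be sketched as the following reproduction steps. This is a hypothetical sketch, not the reporter's exact script: it assumes a two-node trusted pool (rhs-srv-08, rhs-srv-09) with the brick directories from the vol info already created, and uses the 3.7-era `attach-tier` CLI syntax, which may differ on other releases.

```shell
# Create the cold 2x2 distributed-replicate volume and start it.
gluster volume create test replica 2 \
    rhs-srv-09:/home/disk/d1 rhs-srv-08:/home/disk/d1 \
    rhs-srv-09:/home/disk/d2 rhs-srv-08:/home/disk/d2
gluster volume start test

# Attach the hot tier, turning the volume type into "Tier".
gluster volume attach-tier test replica 2 \
    rhs-srv-08:/home/ssd/s2 rhs-srv-09:/home/ssd/s2 \
    rhs-srv-08:/home/ssd/s1 rhs-srv-09:/home/ssd/s1

# On affected builds this step failed with
# "quota command failed : Commit failed on localhost";
# on the fixed build it reports "volume quota : success".
gluster volume quota test enable

# Confirm glusterd spawned the quotad process.
ps -ef | grep quotad | grep -v grep
```

The key point is the last two commands: the fix is in quotad volfile generation for tiered volumes, so success means both that `quota enable` commits cleanly and that a quotad process is actually running afterwards.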