Same with volume start:

[root@client14 ~]# /opt/anush/gitrdma/sbin/gluster volume start test3
Starting volume test3 has been unsuccessful

[root@client14 ~]# ps ax | grep glus
18231 ?  Ssl  0:00 /opt/anush/gitrdma/sbin/glusterd
18306 ?  SLsl 0:00 /opt/anush/gitrdma/sbin/glusterfs --xlator-option test3-server.listen-port=6971 -s localhost --volfile-id test3.10.1.10.34.share-anush-export3- -p /etc/glusterd/vols/test3/run/10.1.10.34-share-anush-export3-.pid --brick-name /share/anush/export3/ --brick-port 6971 -l /etc/glusterd/logs/share-anush-export3-.log
18310 ?  SLsl 0:00 /opt/anush/gitrdma/sbin/glusterfs -f /etc/glusterd/nfs/nfs-server.vol -p /etc/glusterd/nfs/run/nfs.pid
[root@client14 ~]# /opt/anush/gitrdma/sbin/gluster volume create test3 transport rdma 10.1.10.31:/share/anush/export1/ 10.1.10.32:/share/anush/export2/ 10.1.10.34:/share/anush/export3/ 10.1.10.35:/share/anush/export4/
Creation of volume test3 has been unsuccessful
Creating Volume test3 failed

[root@client14 ~]# /opt/anush/gitrdma/sbin/gluster volume info

Volume Name: test3
Type: None
Status: Created
Number of Bricks: 4
Bricks:
Brick1: 10.1.10.31:/share/anush/export1/
Brick2: 10.1.10.32:/share/anush/export2/
Brick3: 10.1.10.34:/share/anush/export3/
Brick4: 10.1.10.35:/share/anush/export4
Anush pointed out that one of the backend bricks was 100% full, which caused these failure messages.
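A quick way to confirm a full backend is to check the brick filesystems with df on each server. A sketch, using the brick paths from this report (it must be run on each host that exports a brick):

```shell
# Report filesystem usage for each local brick path; a brick at
# 100% use will make volume operations fail.
for brick in /share/anush/export1 /share/anush/export2 \
             /share/anush/export3 /share/anush/export4; do
    df -hP "$brick" 2>/dev/null
done
```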
Reopening the bug; please save the logs. Also check the /etc/glusterd directory for the volume's associated files for errors.