Description of problem:
=======================
Tried creating 1x3 volumes; the volume status output is below. LV creation and mounting work fine.

[root@rhsqa1 ~]# gluster v status
Status of volume: data
Gluster process                                        TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhsqa1.lab.eng.blr.redhat.com:/rhgs/data/data    N/A       N/A        N       N/A
Brick rhsqa13.lab.eng.blr.redhat.com:/rhgs/data/data   N/A       N/A        N       N/A
Brick rhsqa4.lab.eng.blr.redhat.com:/rhgs/data/data    N/A       N/A        N       N/A
NFS Server on localhost                                N/A       N/A        N       N/A
Self-heal Daemon on localhost                          N/A       N/A        N       N/A
NFS Server on rhsqa4.lab.eng.blr.redhat.com            N/A       N/A        N       N/A
Self-heal Daemon on rhsqa4.lab.eng.blr.redhat.com      N/A       N/A        N       N/A
NFS Server on rhsqa13.lab.eng.blr.redhat.com           N/A       N/A        N       N/A
Self-heal Daemon on rhsqa13.lab.eng.blr.redhat.com     N/A       N/A        N       N/A

Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: engine_vol
Gluster process                                        TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhsqa1.lab.eng.blr.redhat.com:/rhgs/engine/ev    N/A       N/A        N       N/A
Brick rhsqa13.lab.eng.blr.redhat.com:/rhgs/engine/ev   N/A       N/A        N       N/A
Brick rhsqa4.lab.eng.blr.redhat.com:/rhgs/engine/ev    N/A       N/A        N       N/A
NFS Server on localhost                                N/A       N/A        N       N/A
Self-heal Daemon on localhost                          N/A       N/A        N       N/A
NFS Server on rhsqa4.lab.eng.blr.redhat.com            N/A       N/A        N       N/A
Self-heal Daemon on rhsqa4.lab.eng.blr.redhat.com      N/A       N/A        N       N/A
NFS Server on rhsqa13.lab.eng.blr.redhat.com           N/A       N/A        N       N/A
Self-heal Daemon on rhsqa13.lab.eng.blr.redhat.com     N/A       N/A        N       N/A

Task Status of Volume engine_vol
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: vmstore
Gluster process                                        TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick rhsqa1.lab.eng.blr.redhat.com:/rhgs/vmstore/vms  N/A       N/A        Y       12215
Brick rhsqa13.lab.eng.blr.redhat.com:/rhgs/vmstore/vms N/A       N/A        Y       16131
Brick rhsqa4.lab.eng.blr.redhat.com:/rhgs/vmstore/vms  N/A       N/A        Y       18868
NFS Server on localhost                                N/A       N/A        N       N/A
Self-heal Daemon on localhost                          N/A       N/A        N       N/A
NFS Server on rhsqa4.lab.eng.blr.redhat.com            N/A       N/A        N       N/A
Self-heal Daemon on rhsqa4.lab.eng.blr.redhat.com      N/A       N/A        N       N/A
NFS Server on rhsqa13.lab.eng.blr.redhat.com           N/A       N/A        N       N/A
Self-heal Daemon on rhsqa13.lab.eng.blr.redhat.com     N/A       N/A        N       N/A

Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks

volume creation conf file:
==========================
[volume1]
action=create
volname=engine_vol
transport=tcp,rdma
replica=yes
replica_count=3
force=yes
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
brick_dirs=/rhgs/engine/ev

[volume2]
action=create
volname=vmstore
transport=tcp,rdma
replica=yes
replica_count=3
force=yes
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
brick_dirs=/rhgs/vmstore/vms

[volume3]
action=create
volname=data
transport=tcp,rdma
replica=yes
replica_count=3
force=yes
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
brick_dirs=/rhgs/data/data

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gdeploy-2.0-8

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
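For reference, each [volumeN] section above corresponds roughly to the gluster CLI commands gdeploy would issue. This is only a sketch (shown for engine_vol, assuming the three hosts are already peer-probed), not a dump of what gdeploy actually ran:

```
# Sketch of the equivalent manual commands for [volume1]
gluster volume create engine_vol replica 3 transport tcp,rdma \
    rhsqa1.lab.eng.blr.redhat.com:/rhgs/engine/ev \
    rhsqa13.lab.eng.blr.redhat.com:/rhgs/engine/ev \
    rhsqa4.lab.eng.blr.redhat.com:/rhgs/engine/ev force

# Options from the key=/value= lists, applied pairwise
gluster volume set engine_vol group virt
gluster volume set engine_vol storage.owner-uid 36
gluster volume set engine_vol storage.owner-gid 36
gluster volume set engine_vol features.shard on
gluster volume set engine_vol features.shard-block-size 512MB
gluster volume set engine_vol performance.low-prio-threads 32
gluster volume set engine_vol cluster.data-self-heal-algorithm full

gluster volume start engine_vol
```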
Bhaskar, this is due to a bug in gluster. In the config file you've written tcp,rdma for transport, which causes the volume start to fail:

volume start: foo: failed: Commit failed on localhost. Please check log file for details.

For now, remove rdma from the transport option until the gluster issue is root-caused and fixed. Also, Kaushal suggested installing the glusterfs-rdma package and checking. Can you please close this bug?
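Concretely, the workaround amounts to changing the transport line in each [volumeN] section of the gdeploy config (shown here for [volume1] only; the other sections change the same way):

```ini
[volume1]
action=create
volname=engine_vol
# Workaround: tcp only, until the tcp,rdma volume-start failure in gluster is fixed
transport=tcp
replica=yes
replica_count=3
force=yes
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm
value=virt,36,36,on,512MB,32,full
brick_dirs=/rhgs/engine/ev
```

If rdma transport is needed later, also verify the glusterfs-rdma package is installed on the nodes, as suggested above.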
I will check that. Closing this for now.