Description of problem: Currently the daemon is responsible for creating the log directory as part of its initialization. If someone has just installed gluster-block for the first time and has not yet started the daemon, running a CLI command fails: the CLI logs failures to the log directory, finds that the directory does not exist, and bails out.
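For illustration, a minimal sketch of the kind of guard the CLI needs: create the log directory itself if it is missing, and fall back to stderr only when that also fails. The paths and the helper name here are assumptions for the sketch, not the actual gluster-block code.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Hypothetical paths for illustration; the real locations used by
 * gluster-block may differ. */
#define GB_LOGDIR  "/var/log/gluster-block"
#define GB_CLI_LOG GB_LOGDIR "/cli.log"

/* Create the log directory if it is missing, so the CLI does not
 * depend on the daemon having initialized it first. */
static int
ensureLogDir(void)
{
    if (mkdir(GB_LOGDIR, 0755) == -1 && errno != EEXIST)
        return -1;
    return 0;
}

int
main(void)
{
    FILE *logf = NULL;

    if (ensureLogDir() == 0)
        logf = fopen(GB_CLI_LOG, "a");

    if (!logf) {
        /* Fall back to stderr, matching the behavior seen in the logs. */
        fprintf(stderr, "Error opening log file: %s\nLogging to stderr.\n",
                strerror(errno));
        logf = stderr;
    }

    fprintf(logf, "gluster-block CLI started\n");
    if (logf != stderr)
        fclose(logf);
    return 0;
}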
Tested and verified this on the builds glusterfs-3.8.4-31 and gluster-block-0.2.1-3. When the daemon has not been started (and hence the gluster-block log directory is not present), any gluster-block command that errors out prints its errors to stderr on the terminal instead. Moving this to Verified for 3.3. Logs are pasted below.

[root@dhcp47-115 ~]# gluster-block create ozone/testblock auth enable ha 3 10.70.47.121,10.70.47.113,10.70.47.114
Error opening log file: No such file or directory
Logging to stderr.
[2017-06-28 10:45:48.297613] ERROR: Connection failed. Please check if gluster-block daemon is operational. [at gluster-block.c+159 :<glusterBlockCliRPC_1>]
Connection failed. Please check if gluster-block daemon is operational.
Error opening log file: No such file or directory
Logging to stderr.
[2017-06-28 10:45:48.298458] ERROR: failed creating block testblock on volume ozone with hosts ha [at gluster-block.c+421 :<glusterBlockCreate>]
Error opening log file: No such file or directory
Logging to stderr.
[2017-06-28 10:45:48.298490] ERROR: failed in create [at gluster-block.c+540 :<glusterBlockParseArgs>]
[root@dhcp47-115 ~]#
[root@dhcp47-115 ~]# systemctl gluster-blockd
Unknown operation 'gluster-blockd'.
[root@dhcp47-115 ~]# systemctl status gluster-blockd
● gluster-blockd.service - Gluster block storage utility
   Loaded: loaded (/usr/lib/systemd/system/gluster-blockd.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@dhcp47-115 ~]#
[root@dhcp47-115 ~]# rpm -qa | grep gluster-blockd
[root@dhcp47-115 ~]# rpm -qa | grep gluster-block
gluster-block-0.2.1-3.el7rhgs.x86_64
[root@dhcp47-115 ~]# rpm -qa | grep glusterfs
glusterfs-cli-3.8.4-31.el7rhgs.x86_64
glusterfs-libs-3.8.4-31.el7rhgs.x86_64
glusterfs-events-3.8.4-31.el7rhgs.x86_64
glusterfs-api-3.8.4-31.el7rhgs.x86_64
samba-vfs-glusterfs-4.6.3-3.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-31.el7rhgs.x86_64
glusterfs-server-3.8.4-31.el7rhgs.x86_64
glusterfs-rdma-3.8.4-31.el7rhgs.x86_64
glusterfs-debuginfo-3.8.4-26.el7rhgs.x86_64
glusterfs-3.8.4-31.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-31.el7rhgs.x86_64
glusterfs-fuse-3.8.4-31.el7rhgs.x86_64
[root@dhcp47-115 ~]#
[root@dhcp47-115 ~]# gluster pool list
UUID                                    Hostname                                State
49610061-1788-4cbc-9205-0e59fe91d842    dhcp47-121.lab.eng.blr.redhat.com       Connected
a0557927-4e5e-4ff7-8dce-94873f867707    dhcp47-113.lab.eng.blr.redhat.com       Connected
c0dac197-5a4d-4db7-b709-dbf8b8eb0896    dhcp47-114.lab.eng.blr.redhat.com       Connected
a96e0244-b5ce-4518-895c-8eb453c71ded    dhcp47-116.lab.eng.blr.redhat.com       Connected
17eb3cef-17e7-4249-954b-fc19ec608304    dhcp47-117.lab.eng.blr.redhat.com       Connected
f828fdfa-e08f-4d12-85d8-2121cafcf9d0    localhost                               Connected
[root@dhcp47-115 ~]#
[root@dhcp47-115 ~]# gluster v info ozone
Volume Name: ozone
Type: Distributed-Replicate
Volume ID: ee279838-502f-4c1f-8ae3-de68c9c64089
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.47.121:/bricks/brick3/ozone0
Brick2: 10.70.47.113:/bricks/brick3/ozone1
Brick3: 10.70.47.114:/bricks/brick3/ozone2
Brick4: 10.70.47.115:/bricks/brick3/ozone3
Brick5: 10.70.47.116:/bricks/brick3/ozone4
Brick6: 10.70.47.117:/bricks/brick3/ozone5
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
cluster.brick-multiplex: disable
cluster.enable-shared-storage: enable
[root@dhcp47-115 ~]#
Since the problem described in this bug report has been resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2773