Created attachment 896780 [details]
reproducer

Description of problem:

The documentation for glfs_init() states:

/*
  SYNOPSIS

  glfs_init: Initialize the 'virtual mount'

  DESCRIPTION

  This function initializes the glfs_t object. This consists of many steps:
  - Spawn a poll-loop thread.
  - Establish connection to management daemon and receive volume specification.
  - Construct translator graph and initialize graph.
  - Wait for initialization (connecting to all bricks) to complete.

  PARAMETERS

  @fs: The 'virtual mount' object to be initialized.

  RETURN VALUES

   0 : Success.
  -1 : Failure. @errno will be set with the type of failure.
*/

However, the function may return 1 without setting errno when a brick
cannot be reached because its hostname fails DNS resolution.

Version-Release number of selected component (if applicable):
3.5.0 GA, 3.4.0, and possibly others

How reproducible:
100%

Steps to Reproduce:
1. Have a gluster volume with a brick declared using a hostname:

$ gluster --remote-host=192.168.123.2 volume info

Volume Name: gv0
Type: Distribute
Volume ID: 1784c1e7-45ed-448e-815b-84c668971ce0
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: gluster-node-1:/mnt/gluster-brick
Options Reconfigured:
auth.allow: *
nfs.rpc-auth-allow: *
server.allow-insecure: on

2. Make sure that "gluster-node-1" cannot be resolved from the remote host:

$ host gluster-node-1
Host gluster-node-1 not found: 3(NXDOMAIN)

3. Try to initialize the volume using the IP address:

gluster = glfs_new("gv0");
glfs_set_volfile_server(gluster, "tcp", "192.168.123.2", 0);

Actual results:
glfs_init() returns 1 and does not set errno.

Expected results:
Either -1 is returned and errno is set to the failure cause, OR the
documentation is updated to state the actual return value.

Additional info:
See the attached reproducer:

$ gcc -lgfapi -o glfs_test glfs_test.c
$ ./glfs_test 192.168.123.2 gv0
glfs_init() returned 1 and didn't set errno
Aborted (core dumped)

Expected output:

$ ./glfs_test 192.168.123.2 gv0
glfs_init() returned -1 set errno: Name or service not known
Aborted (core dumped)
This flaw caused problems with qemu's backend for gluster access:

$ qemu-img info gluster://192.168.123.2/gv0/img7
Segmentation fault (core dumped)

qemu's gluster backend was patched to avoid the issue.
This issue is fixed with recent releases of Gluster. Hence closing this.

[root@deepthought Downloads]# ./a.out foo foo2
glfs_init() returned -1 set errno: Transport endpoint is not connected
[root@deepthought Downloads]# ./a.out localhost foo2
glfs_init was successful!
[root@deepthought Downloads]# gluster volume status foo2
Status of volume: foo2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick deepthought:/data/bricks/nov1         49152     0          Y       29473