Bug 1002422 - Gluster volume info gives error : Connection failed. Please check if gluster daemon is operational. No volumes present [NEEDINFO]
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Assigned To: Atin Mukherjee
QA Contact: storage-qa-internal@redhat.com
Whiteboard: glusterd
Reported: 2013-08-29 03:41 EDT by senaik
Modified: 2015-12-03 12:10 EST
CC: 5 users

Doc Type: Bug Fix
Last Closed: 2015-12-03 12:10:53 EST
Type: Bug
amukherj: needinfo? (senaik)


Attachments: None
Description senaik 2013-08-29 03:41:15 EDT
Description of problem:
========================= 
The following error appears intermittently while checking gluster volume info, even when volumes are present and glusterd is running:

gluster v i 
Connection failed. Please check if gluster daemon is operational.
No volumes present

[root@boost brick1]# gluster v i 
 
Volume Name: Vol4
Type: Distributed-Replicate
Volume ID: 7e3c9c95-9854-4881-9769-def0f3b0572c
Status: Stopped
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.34.85:/rhs/brick1/a1
Brick2: 10.70.34.86:/rhs/brick1/a2
Brick3: 10.70.34.87:/rhs/brick1/a3
Brick4: 10.70.34.88:/rhs/brick1/a4
Brick5: 10.70.34.87:/rhs/brick1/a5
Brick6: 10.70.34.88:/rhs/brick1/a6
 
Volume Name: Volume3
Type: Distribute
Volume ID: 1a61d914-8c42-4b96-8eb7-368067ea1246
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.34.85:/rhs/brick1/e1
Brick2: 10.70.34.86:/rhs/brick1/e2
Brick3: 10.70.34.87:/rhs/brick1/e3
Brick4: 10.70.34.88:/rhs/brick1/e4
[root@boost brick1]# gluster v i 
 
Volume Name: Vol4
Type: Distributed-Replicate
Volume ID: 7e3c9c95-9854-4881-9769-def0f3b0572c
Status: Stopped
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.34.85:/rhs/brick1/a1
Brick2: 10.70.34.86:/rhs/brick1/a2
Brick3: 10.70.34.87:/rhs/brick1/a3
Brick4: 10.70.34.88:/rhs/brick1/a4
Brick5: 10.70.34.87:/rhs/brick1/a5
Brick6: 10.70.34.88:/rhs/brick1/a6
 
Volume Name: Volume3
Type: Distribute
Volume ID: 1a61d914-8c42-4b96-8eb7-368067ea1246
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 10.70.34.85:/rhs/brick1/e1
Brick2: 10.70.34.86:/rhs/brick1/e2
Brick3: 10.70.34.87:/rhs/brick1/e3
Brick4: 10.70.34.88:/rhs/brick1/e4
[root@boost brick1]# gluster v i 

Connection failed. Please check if gluster daemon is operational.
No volumes present
[root@boost brick1]# gluster v i 
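A scripted check can distinguish the spurious connection-failure message above from a genuinely empty volume list by classifying the `gluster volume info` output. This is a sketch, not part of the report; the function name is mine:

```shell
# classify_volume_info: read `gluster volume info` output on stdin and
# print "error" for the spurious connection failure, "empty" when no
# volumes exist, or the number of volumes found otherwise.
classify_volume_info() {
    out=$(cat)
    if printf '%s\n' "$out" | grep -q 'Connection failed'; then
        echo error
    elif printf '%s\n' "$out" | grep -q '^Volume Name:'; then
        printf '%s\n' "$out" | grep -c '^Volume Name:'
    else
        echo empty
    fi
}
```

Running `gluster volume info | classify_volume_info` on an affected node and seeing "error" while glusterd is known to be up indicates a hit of this bug.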


Version-Release number of selected component (if applicable):
============================================================= 
gluster --version
glusterfs 3.4.0.24rhs built on Aug 27 2013 07:29:40


How reproducible:
================== 
Intermittently 


Steps to Reproduce:
==================== 

1. Execute a few volume start, stop, delete, and create operations, then check gluster volume info.
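The reproduction step above can be automated as a stress loop. This is a sketch, not from the report; the volume name, brick, and iteration count are illustrative assumptions:

```shell
# stress_volume_info: cycle a throwaway volume through create/start/
# stop/delete and watch `gluster volume info` for the spurious message.
# `force` and `--mode=script` suppress interactive confirmation prompts.
stress_volume_info() {
    vol=$1 brick=$2 iterations=$3
    i=1
    while [ "$i" -le "$iterations" ]; do
        gluster volume create "$vol" "$brick" force >/dev/null 2>&1
        gluster volume start "$vol" >/dev/null 2>&1
        gluster volume stop "$vol" --mode=script >/dev/null 2>&1
        gluster volume delete "$vol" --mode=script >/dev/null 2>&1
        if gluster volume info 2>&1 | grep -q 'Connection failed'; then
            echo "hit intermittent error on iteration $i"
            return 0
        fi
        i=$((i + 1))
    done
    echo "no error in $iterations iterations"
    return 1
}
```

Run on a cluster node as, for example, `stress_volume_info testvol 10.70.34.85:/rhs/brick1/t1 50`.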

Actual results:
============== 
The 'Connection failed' / 'No volumes present' error message appears intermittently.


Expected results:
================== 
When volumes are present and glusterd is running, the 'No volumes present' error message should not appear.


Additional info:
================ 

------------------Part of .cmd log history--------------------------- 

[2013-08-28 10:53:52.115783]  : v stop Volume1 : SUCCESS
[2013-08-28 10:56:59.552101]  : v stop Volume1 : FAILED : Volume Volume1 is not in the started state
[2013-08-28 10:57:05.525179]  : v start Volume1 : FAILED : Failed to find brick directory /rhs/brick1/b1 for volume Volume1. Reason : No such file or directory
[2013-08-28 10:57:46.694533]  : v delete Volume1 : SUCCESS
[2013-08-28 10:58:18.199856]  : v create Volume3 10.70.34.85:/rhs/brick1/e1 10.70.34.86:/rhs/brick1/e2 10.70.34.87:/rhs/brick1/e3 10.70.34.88:/rhs/brick1/e4 : SUCCESS
[2013-08-28 11:05:01.810878]  : v status : SUCCESS
[2013-08-28 11:05:01.814722]  : v status : FAILED : Volume Vol4 is not started
[2013-08-28 11:05:01.821105]  : v status : SUCCESS
[2013-08-28 11:05:13.741734]  : v start Volume3 : FAILED : Volume Volume3 already started
[2013-08-28 11:05:25.707160]  : v stop Volume3 : SUCCESS
[2013-08-28 11:05:27.879506]  : v start Volume3 : SUCCESS
[2013-08-28 11:17:36.200388]  : volume status : SUCCESS
[2013-08-28 11:17:36.205098]  : volume status : FAILED : Volume Vol4 is not started
[2013-08-28 11:17:36.213799]  : volume status : SUCCESS

-----------------------Part of glusterd log---------------------------- 
[2013-08-28 11:22:27.465924] E [rpcsvc.c:519:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
[2013-08-28 11:22:27.470784] E [rpcsvc.c:519:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
[2013-08-28 11:22:27.499589] E [rpcsvc.c:519:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
[2013-08-28 11:22:28.063980] E [rpcsvc.c:519:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
[2013-08-28 11:22:28.068780] E [rpcsvc.c:519:rpcsvc_handle_rpc_call] 0-glusterd: Request received from non-privileged port. Failing request
Comment 3 Lalatendu Mohanty 2013-08-29 07:43:17 EDT
I have also seen this issue. Below is the scenario in which I hit it.

I have a cluster of 4 nodes with 2 bricks each, on which I had 5 volumes. I stopped the 5 volumes one by one and started a yum update (from glusterfs 3.4.0.23 to glusterfs 3.4.0.24) on 3 of the nodes. On the remaining node, when I checked "gluster v info", I saw the error mentioned in this bug.
Comment 4 maciej.galkiewicz 2013-09-16 05:34:15 EDT
Steps to reproduce:
# gluster volume info vol
Volume Name: vol
Type: Replicate
Volume ID: 16d51333-ef3b-4253-bbcc-59a9f393806b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.0.4:/srv/glusterfs/vol
Brick2: 172.16.0.5:/srv/glusterfs/vol
# gluster volume stop vol
# gluster volume info vol
# gluster volume start vol
# gluster volume info vol
# gluster volume set vol auth.allow 172.16.0.138,172.16.0.6
Connection failed. Please check if gluster daemon is operational.

All commands were executed in a script (without any sleep).

Just before the crash:

[2013-09-16 08:48:17.908841] W [socket.c:514:__socket_rwv] 0-roots-staging-client-0: readv failed (No data available)
[2013-09-16 08:48:20.909766] W [socket.c:514:__socket_rwv] 0-roots-staging-client-0: readv failed (No data available)
[2013-09-16 08:48:23.910556] W [socket.c:514:__socket_rwv] 0-roots-staging-client-0: readv failed (No data available)
[2013-09-16 08:48:25.887573] W [glusterfsd.c:1002:cleanup_and_exit] (-->/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fd5cba4893d] (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e0e) [0x7fd5cc105e0e] (-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xd5) [0x7fd5ccbf8545]))) 0-: received signum (15), shutting down
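Since the commands in this comment ran back to back with no delay, one mitigation while the race exists is to wrap each CLI call in a retry loop. A sketch, not from the report; the retry count and 1-second backoff are assumptions:

```shell
# retry_cmd: run a command up to $1 times, sleeping between attempts,
# until its output no longer contains the "Connection failed" message.
# Prints the last output and returns 0 on success, 1 if all attempts fail.
retry_cmd() {
    attempts=$1; shift
    n=1
    while :; do
        out=$("$@" 2>&1)
        if ! printf '%s\n' "$out" | grep -q 'Connection failed'; then
            printf '%s\n' "$out"
            return 0
        fi
        if [ "$n" -ge "$attempts" ]; then
            printf '%s\n' "$out"
            return 1
        fi
        n=$((n + 1))
        sleep 1   # assumed backoff; tune as needed
    done
}
```

For example, the failing command from this comment could be run as `retry_cmd 5 gluster volume set vol auth.allow 172.16.0.138,172.16.0.6`.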
Comment 5 maciej.galkiewicz 2013-09-23 05:42:14 EDT
Any progress? GlusterFS is quite unusable for me right now.
Comment 6 Atin Mukherjee 2014-01-02 07:18:35 EST
It's not reproducible in the latest downstream build with the given steps. Please recheck and let me know if the behaviour persists.
Comment 7 Atin Mukherjee 2014-01-02 23:29:24 EST
As I am not able to reproduce the defect, can you please retry it and let me know the exact steps?
Comment 9 Vivek Agarwal 2015-12-03 12:10:53 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
