Bug 1225565 - [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state
Summary: [Backup]: RFE - Glusterfind CLI commands need to respond based on volume's start/stop state
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterfind
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Milind Changire
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1225564
Blocks: glusterfs-3.7.2
 
Reported: 2015-05-27 16:42 UTC by Aravinda VK
Modified: 2015-06-20 09:48 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.7.2
Doc Type: Enhancement
Doc Text:
Clone Of: 1225564
Environment:
Last Closed: 2015-06-20 09:48:40 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Aravinda VK 2015-05-27 16:42:00 UTC
+++ This bug was initially created as a clone of Bug #1225564 +++

+++ This bug was initially created as a clone of Bug #1224236 +++

Description of problem:
If a volume is in a stopped state, glusterfind create succeeds (!), and pre fails with the error 'Changelog register failed: Connection refused'. Post and delete succeed. The glusterfind CLI commands need to behave uniformly depending on the start/stop state of the volume.

This could be further enhanced to gracefully handle the scenario where the volume goes into a stopped state while a pre command is running.

Version-Release number of selected component (if applicable):


How reproducible: Always


Steps to Reproduce:
1. Create a new volume and leave it in a stopped state.
2. Create a glusterfind session; this succeeds.
3. Execute glusterfind pre; it fails with a 'Connection refused' error.
4. Delete the glusterfind session; the session gets deleted.


Expected results:

* Step 3 should fail gracefully with a more relevant error message.
* All glusterfind CLI commands - create/pre/post/delete/list - should have uniform behaviour based on the state of the volume.

Additional info:

[root@dhcp43-140 ~]# 
[root@dhcp43-140 ~]# gluster v stop nash
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: nash: success
[root@dhcp43-140 ~]# glusterfind pre sess_nash nash /tmp/out.txt --regenerate-outfile
10.70.43.140 - pre failed: /rhs/thinbrick2/nash/dd Changelog register failed: [Errno 111] Connection refused

10.70.43.140 - pre failed: /rhs/thinbrick1/nash/dd Changelog register failed: [Errno 111] Connection refused

10.70.42.75 - pre failed: [2015-05-22 16:26:00.321920] I [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
/rhs/thinbrick1/nash/dd Changelog register failed: [Errno 111] Connection refused

10.70.42.75 - pre failed: /rhs/thinbrick2/nash/dd Changelog register failed: [Errno 111] Connection refused

Generated output file /tmp/out.txt
[root@dhcp43-140 ~]#


--- Additional comment from Aravinda VK on 2015-05-27 12:39:44 EDT ---

One more validation for the glusterfind commands: check the volume status before executing the create and pre commands.

XPath in volume info: volInfo/volumes/volume/statusStr

--- Additional comment from Aravinda VK on 2015-05-27 12:40:13 EDT ---

volInfo/volumes/volume/statusStr should report the Started state.
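
A minimal sketch of that check, assuming the gluster CLI's --xml output and Python's xml.etree.ElementTree; the helper names, error message, and exit code here are illustrative and are not taken from the actual fix posted for review in comment 2:

import subprocess
import sys
import xml.etree.ElementTree as ET

def volume_is_started(volname):
    """Return True if gluster reports the volume's statusStr as 'Started'."""
    # `gluster volume info <vol> --xml` wraps the volInfo tree in <cliOutput>,
    # so the XPath below is relative to that root element.
    out = subprocess.check_output(["gluster", "volume", "info", volname, "--xml"])
    root = ET.fromstring(out)
    status = root.find("volInfo/volumes/volume/statusStr")
    return status is not None and status.text.strip() == "Started"

def fail_unless_started(volname, command):
    # Uniform, early failure for create/pre instead of the per-brick
    # "Changelog register failed: Connection refused" seen in the transcript above.
    if not volume_is_started(volname):
        sys.stderr.write("glusterfind %s failed: volume %s is not in Started state\n"
                         % (command, volname))
        sys.exit(2)

Calling such a guard at the start of the create and pre code paths would turn step 3 of the reproducer into an immediate, self-explanatory failure rather than the late 'Connection refused' errors shown above.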

Comment 1 Niels de Vos 2015-06-02 08:20:21 UTC
The required changes to fix this bug have not made it into glusterfs-3.7.1. This bug is now getting tracked for glusterfs-3.7.2.

Comment 2 Anand Avati 2015-06-11 14:00:03 UTC
REVIEW: http://review.gluster.org/11187 (tools/glusterfind: verifying volume is online) posted (#1) for review on release-3.7 by Milind Changire (mchangir)

Comment 3 Niels de Vos 2015-06-20 09:48:40 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.2, please reopen this bug report.

glusterfs-3.7.2 has been announced on the Gluster Packaging mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/packaging/2015-June/000006.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

