Bug 1479664 - create update delete 100 volumes, at last there are 4 volumes left
Summary: create update delete 100 volumes, at last there are 4 volumes left
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.10
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-08-09 06:08 UTC by lei.zhou
Modified: 2017-08-17 02:59 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-16 08:01:22 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
nodes glusterd log and cmd history (2.55 MB, application/zip)
2017-08-10 08:18 UTC, lei.zhou
no flags
sorry, adding the missing nodes' cmd_history.log (6.77 MB, application/zip)
2017-08-11 02:07 UTC, lei.zhou
no flags

Description lei.zhou 2017-08-09 06:08:17 UTC
Description of problem:

[root@node-3:~]$ gluster --version
glusterfs 3.10.3

gluster cannot handle a lot of continuous operations!
e.g. create, update, and delete 100 volumes; at the end there are 4 volumes left.

How reproducible:

Steps to Reproduce:
1. create 100 volumes
2. update the quota on each of the 100 volumes
3. delete the 100 volumes (a rough reproduction sketch follows)
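
As a reproduction sketch only (the volume names, the brick path /bricks, and the 1GB quota limit are placeholders, not values taken from this report), the three steps can be driven from bash on one of the nodes:

#!/bin/bash
# Reproduction sketch: create 100 volumes, set a quota on each, then stop and
# delete them all. Brick path, volume names and quota limit are illustrative.
HOST=$(hostname)
for i in $(seq 1 100); do
    vol="testvol-$i"
    gluster volume create "$vol" "$HOST:/bricks/$vol" force
    gluster volume start "$vol"
    gluster volume quota "$vol" enable
    gluster volume quota "$vol" limit-usage / 1GB    # the "update" step
done
for i in $(seq 1 100); do
    vol="testvol-$i"
    gluster --mode=script volume stop "$vol"
    gluster --mode=script volume delete "$vol"
done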

Actual results:
[root@node-3:~]$ gluster volume list
05fccafb-810b-4dab-8f24-7149617afa9f
65298ee6-b321-495b-8ba9-93c7c7342e85
865925c7-9914-4c55-a51e-65eb52e256df
b00d8572-513a-4614-a661-05c5f1bc2eac

Expected results:
[root@node-3:~]$ gluster volume list

Additional info:

See glusterd.log in the attachment.

Comment 1 Atin Mukherjee 2017-08-10 07:51:22 UTC
Please attach the glusterd and cmd_history log files from all the nodes.

Comment 2 lei.zhou 2017-08-10 08:18:11 UTC
Created attachment 1311627 [details]
nodes glusterd log and cmd history

Comment 3 lei.zhou 2017-08-11 02:07:36 UTC
Created attachment 1311937 [details]
sorry, adding the missing nodes' cmd_history.log

Comment 4 Atin Mukherjee 2017-08-16 08:01:22 UTC
This is not a bug. The volumes that are still visible through gluster volume list were never deleted, because the delete operation on those volumes failed. For example:

from 136.136.136.146

[2017-08-09 02:53:52.693510]  : volume stop 05fccafb-810b-4dab-8f24-7149617afa9f : FAILED : Another transaction is in progress for 05fccafb-810b-4dab-8f24-7149617afa9f. Please try again after sometime.
[2017-08-09 02:53:52.722636]  : volume delete 05fccafb-810b-4dab-8f24-7149617afa9f : FAILED : Another transaction is in progress for 05fccafb-810b-4dab-8f24-7149617afa9f. Please try again after sometime.


This can happen when two nodes try to initiate a transaction on the same volume at the same time: one of them succeeds and the other does not. From the other node's cmd_history.log I can see that a volume status command was triggered at the same time, which made glusterd take the lock on volume 05fccafb-810b-4dab-8f24-7149617afa9f on that node, and that in turn caused the volume stop and delete to fail.
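
Since the failure is transient lock contention rather than corruption, retrying the stop/delete after a short pause works around it. A minimal sketch, assuming the volumes are driven from bash (the retry count and sleep interval are illustrative, not values recommended anywhere in this report):

#!/bin/bash
# Workaround sketch: retry stop/delete when glusterd reports
# "Another transaction is in progress" on the volume.
vol="05fccafb-810b-4dab-8f24-7149617afa9f"

retry() {
    local attempt
    for attempt in $(seq 1 5); do
        "$@" && return 0
        sleep 3    # back off and let the competing transaction finish
    done
    return 1
}

retry gluster --mode=script volume stop "$vol"
retry gluster --mode=script volume delete "$vol"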

Comment 5 lei.zhou 2017-08-17 02:59:17 UTC
I use gstatus to report the GlusterFS server status every 30 seconds. Is that the reason?

Can GlusterFS handle more than one volume operation simultaneously?

If there are plenty of volumes to create, update, and delete, could the parallel-processing performance of GlusterFS be improved?

