Bug 1479664 - create update delete 100 volumes, at last there are 4 volumes left
Status: CLOSED NOTABUG
Product: GlusterFS
Classification: Community
Component: glusterd
3.10
x86_64 Linux
unspecified Severity medium
Assigned To: bugs@gluster.org
Depends On:
Blocks:
Reported: 2017-08-09 02:08 EDT by lei.zhou
Modified: 2017-08-16 22:59 EDT (History)
3 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-16 04:01:22 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
nodes glusterd log and cmd history (2.55 MB, application/zip)
2017-08-10 04:18 EDT, lei.zhou
Sorry, adding the missing nodes' cmd_history.log (6.77 MB, application/zip)
2017-08-10 22:07 EDT, lei.zhou

Description lei.zhou 2017-08-09 02:08:17 EDT
Description of problem:

[root@node-3:~]$ gluster --version
glusterfs 3.10.3

gluster cannot handle a large number of back-to-back volume operations.
E.g. create, update, and delete 100 volumes; at the end, 4 volumes are left behind.

How reproducible:

Steps to Reproduce:
1. Create 100 volumes
2. Update the quota on each of the 100 volumes
3. Delete the 100 volumes
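The steps above can be sketched as a loop over the gluster CLI. This is a hypothetical reconstruction, not the reporter's actual script: the volume names, brick path (node-3:/bricks/...), and 1GB quota limit are illustrative assumptions, and --mode=script suppresses the interactive stop/delete confirmation prompts. GLUSTER can be overridden (e.g. GLUSTER=echo) for a dry run.

```shell
# Hypothetical reproduction sketch: churn N create/quota/stop/delete cycles.
# Volume names, brick paths, and the quota limit are assumptions.
churn_volumes() {
    count="${1:-100}"
    i=1
    while [ "$i" -le "$count" ]; do
        vol="testvol-$i"
        # create and start the volume (single-brick layout assumed)
        ${GLUSTER:-gluster} volume create "$vol" node-3:/bricks/"$vol" force
        ${GLUSTER:-gluster} volume start "$vol"
        # update step: enable quota and set an illustrative limit
        ${GLUSTER:-gluster} volume quota "$vol" enable
        ${GLUSTER:-gluster} volume quota "$vol" limit-usage / 1GB
        # delete step: --mode=script skips the y/n confirmation
        ${GLUSTER:-gluster} --mode=script volume stop "$vol"
        ${GLUSTER:-gluster} --mode=script volume delete "$vol"
        i=$((i + 1))
    done
}
```

With 100 iterations of this against a cluster that is also being polled by other management commands, some stop/delete calls can collide with concurrent transactions, which matches the leftover volumes reported below.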

Actual results:
[root@node-3:~]$ gluster volume list
05fccafb-810b-4dab-8f24-7149617afa9f
65298ee6-b321-495b-8ba9-93c7c7342e85
865925c7-9914-4c55-a51e-65eb52e256df
b00d8572-513a-4614-a661-05c5f1bc2eac

Expected results:
[root@node-3:~]$ gluster volume list

Additional info:

see glusterd.log in attachment
Comment 1 Atin Mukherjee 2017-08-10 03:51:22 EDT
Please attach the glusterd and cmd_history log files from all the nodes.
Comment 2 lei.zhou 2017-08-10 04:18 EDT
Created attachment 1311627 [details]
nodes glusterd log and cmd history
Comment 3 lei.zhou 2017-08-10 22:07 EDT
Created attachment 1311937 [details]
Sorry, adding the missing nodes' cmd_history.log
Comment 4 Atin Mukherjee 2017-08-16 04:01:22 EDT
This is not a bug. The volumes still visible through gluster volume list were never deleted, because the delete operation on them failed. For example:

from 136.136.136.146

[2017-08-09 02:53:52.693510]  : volume stop 05fccafb-810b-4dab-8f24-7149617afa9f : FAILED : Another transaction is in progress for 05fccafb-810b-4dab-8f24-7149617afa9f. Please try again after sometime.
[2017-08-09 02:53:52.722636]  : volume delete 05fccafb-810b-4dab-8f24-7149617afa9f : FAILED : Another transaction is in progress for 05fccafb-810b-4dab-8f24-7149617afa9f. Please try again after sometime.


This can happen when two nodes try to initiate a transaction on the same volume: one of them succeeds and the other does not. From the other node's cmd_history.log file I can see that a volume status command was triggered at the same time, which caused glusterd on that node to take the lock on volume 05fccafb-810b-4dab-8f24-7149617afa9f, and that in turn caused the volume stop and delete to fail.
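Since "Another transaction is in progress" is a transient condition (the per-volume lock is released once the competing transaction finishes), a client driving bulk operations can retry instead of treating it as fatal. A minimal sketch, not part of gluster itself; the function name, retry count, and linear backoff are my own assumptions:

```shell
# Retry a gluster command while it reports the transient per-volume lock
# error; give up after MAX_RETRIES attempts (default 5). A sketch only.
retry_gluster() {
    max="${MAX_RETRIES:-5}"
    n=0
    while :; do
        # run the command, capturing stdout+stderr; return on success
        out=$("$@" 2>&1) && { printf '%s\n' "$out"; return 0; }
        case "$out" in
            *"Another transaction is in progress"*) ;;  # transient: retry
            *) printf '%s\n' "$out" >&2; return 1 ;;    # real failure
        esac
        n=$((n + 1))
        [ "$n" -ge "$max" ] && { printf '%s\n' "$out" >&2; return 1; }
        sleep "$n"   # simple linear backoff between attempts
    done
}
```

Usage would look like retry_gluster gluster --mode=script volume delete "$vol", so a stop/delete that races with a concurrent volume status is retried rather than silently leaving the volume behind.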
Comment 5 lei.zhou 2017-08-16 22:59:17 EDT
I use gstatus to report the glusterfs server status every 30 seconds. Is that the reason?

Can glusterfs handle more than one volume operation simultaneously?

If there are plenty of volumes to CRUD, could the parallel processing performance of glusterfs be improved?
