
Bug 1479664

Summary: create update delete 100 volumes, at last there are 4 volumes left
Product: [Community] GlusterFS
Reporter: lei.zhou <zhou.lei56>
Component: glusterd
Assignee: bugs <bugs>
Status: CLOSED NOTABUG
QA Contact:
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.10
CC: amukherj, bugs, zhou.lei56
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-08-16 08:01:22 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments:
- nodes glusterd log and cmd history (flags: none)
- missing cmd_history.log files from the nodes (flags: none)

Description lei.zhou 2017-08-09 06:08:17 UTC
Description of problem:

[root@node-3:~]$ gluster --version
glusterfs 3.10.3

gluster cannot handle a large number of consecutive operations!
e.g. create, update, and delete 100 volumes; at the end there are 4 volumes left.

How reproducible:

Steps to Reproduce:
1. Create 100 volumes
2. Update the quota on each of the 100 volumes
3. Delete the 100 volumes (a minimal reproduction sketch follows below)
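
For illustration, a minimal reproduction sketch along the lines of the steps above; the brick path, quota limit, UUID naming, and --mode=script usage are assumptions and not taken from the original report:

#!/bin/bash
# Hedged reproduction sketch: create, quota, then delete 100 volumes.
# Brick path and quota limit are assumptions; adjust for the real setup.
HOST=$(hostname)

# Step 1: create and start 100 single-brick volumes with UUID names.
for i in $(seq 1 100); do
    vol=$(uuidgen)
    mkdir -p "/bricks/$vol"
    gluster --mode=script volume create "$vol" "$HOST:/bricks/$vol" force
    gluster --mode=script volume start "$vol"
    echo "$vol" >> /tmp/vols.txt
done

# Step 2: enable quota and set a limit on every volume.
while read -r vol; do
    gluster volume quota "$vol" enable
    gluster volume quota "$vol" limit-usage / 1GB
done < /tmp/vols.txt

# Step 3: stop and delete every volume.
while read -r vol; do
    gluster --mode=script volume stop "$vol"
    gluster --mode=script volume delete "$vol"
done < /tmp/vols.txt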

Actual results:
[root@node-3:~]$ gluster volume list
05fccafb-810b-4dab-8f24-7149617afa9f
65298ee6-b321-495b-8ba9-93c7c7342e85
865925c7-9914-4c55-a51e-65eb52e256df
b00d8572-513a-4614-a661-05c5f1bc2eac

Expected results:
[root@node-3:~]$ gluster volume list

Additional info:

see glusterd.log in attachment

Comment 1 Atin Mukherjee 2017-08-10 07:51:22 UTC
Please attach the glusterd and cmd_history log files from all the nodes.

Comment 2 lei.zhou 2017-08-10 08:18:11 UTC
Created attachment 1311627 [details]
nodes glusterd log and cmd history

Comment 3 lei.zhou 2017-08-11 02:07:36 UTC
Created attachment 1311937 [details]
Sorry, adding the missing cmd_history.log files from the nodes.

Comment 4 Atin Mukherjee 2017-08-16 08:01:22 UTC
This is not a bug. The volumes that are still visible through gluster volume list were never deleted, because the delete operation on them failed. For example:

From 136.136.136.146:

[2017-08-09 02:53:52.693510]  : volume stop 05fccafb-810b-4dab-8f24-7149617afa9f : FAILED : Another transaction is in progress for 05fccafb-810b-4dab-8f24-7149617afa9f. Please try again after sometime.
[2017-08-09 02:53:52.722636]  : volume delete 05fccafb-810b-4dab-8f24-7149617afa9f : FAILED : Another transaction is in progress for 05fccafb-810b-4dab-8f24-7149617afa9f. Please try again after sometime.


This can happen when two nodes try to initiate a transaction on the same volume: one of them succeeds and the other does not. From the other node's cmd_history.log file I can see that a volume status command was triggered at the same time, which caused glusterd on that node to take a lock on volume 05fccafb-810b-4dab-8f24-7149617afa9f, and that in turn caused the volume stop and delete to fail.
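
One possible way to cope with this, purely as a hedged sketch (the retry count and sleep interval are arbitrary, and this does not remove the underlying lock contention), is to retry the stop/delete when the CLI reports that another transaction is in progress:

#!/bin/bash
# Hedged workaround sketch: retry a gluster command a few times when it
# fails, e.g. because "Another transaction is in progress" on the volume.
retry() {
    local attempt
    for attempt in $(seq 1 5); do
        if "$@"; then
            return 0
        fi
        echo "attempt $attempt failed: $*; retrying in 5s" >&2
        sleep 5
    done
    return 1
}

vol="05fccafb-810b-4dab-8f24-7149617afa9f"   # one of the leftover volumes from this report
retry gluster --mode=script volume stop "$vol"
retry gluster --mode=script volume delete "$vol"

Serializing the bulk delete with the periodic status polling (or pausing the polling while the deletes run) would avoid the contention described above in the first place.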

Comment 5 lei.zhou 2017-08-17 02:59:17 UTC
I use gstatus to report the GlusterFS server status every 30 seconds. Is that the reason?

Can GlusterFS handle more than one volume operation simultaneously?

If there are plenty of volumes to create, update, and delete, could the parallel processing performance of GlusterFS be improved?